Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division
2009-06-09
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron-accelerator-driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is approximately 375 kW, including a fission power of approximately 260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which lowers the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. Recent developments in Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the
Monte Carlo methods on advanced computer architectures
Martin, W.R.
1991-12-31
Monte Carlo methods describe a wide class of computational methods that use random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo arose during the Manhattan Project to describe the new mathematical methods being developed, which bore some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and is the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in that excellent book. For the remainder of this paper, the term Monte Carlo is synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
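The definite-integral use mentioned in the abstract above can be illustrated with a minimal sketch (not from the review; the integrand and sample count are illustrative), in Python:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=1):
    """Estimate the definite integral of f over [a, b] by averaging
    f at uniformly sampled points and scaling by the interval width."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The statistical error of such an estimate shrinks as 1/sqrt(n), independent of dimension, which is why the method scales to transport problems that deterministic quadrature cannot handle.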
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; and explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, the calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant; it has resulted in 13
CARLOS: Computer-Assisted Instruction in Spanish at Dartmouth College.
ERIC Educational Resources Information Center
Turner, Ronald C.
The computer-assisted instruction project in review Spanish, Computer-Assisted Review Lessons on Syntax (CARLOS), initiated at Dartmouth College in 1967-68, is described here. Tables are provided showing the results of the experiment on the basis of aptitude and achievement tests, and the procedure for implementing CARLOS as well as its place in…
A Monte Carlo photocurrent/photoemission computer program
NASA Technical Reports Server (NTRS)
Chadsey, W. L.; Ragona, C.
1972-01-01
A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.
Quantum Monte Carlo Endstation for Petascale Computing
David Ceperley
2011-03-02
CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector, obtaining an accurate measure of the renormalization factor, which we compared with quantum Monte Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with experiment.
We have been studying the heat of formation for various Kubas complexes of molecular
de Finetti Priors using Markov chain Monte Carlo computations.
Bacallado, Sergio; Diaconis, Persi; Holmes, Susan
2015-07-01
Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analysis of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin, and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible, or three-way models for discrete exponential families using polynomial priors and Gröbner bases.
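The Metropolis-Hastings machinery named in the abstract above rests on the standard accept/reject rule; a generic one-dimensional random-walk sketch (illustrative only, without the paper's polynomial priors or Gröbner-basis structure) could look like:

```python
import math
import random

def metropolis_hastings(log_post, x0, n=5000, step=1.0, seed=7):
    """Random-walk Metropolis-Hastings for a 1-D log-posterior.
    Proposes x' ~ N(x, step); accepts with probability
    min(1, post(x')/post(x)). Returns the chain of states."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: standard normal posterior, log p(x) = -x^2/2 up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
```

Note that the log-posterior only needs to be known up to an additive constant, which is precisely what makes the method usable when the normalizing constant of the posterior is intractable.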
de Finetti Priors using Markov chain Monte Carlo computations
Bacallado, Sergio; Diaconis, Persi; Holmes, Susan
2015-01-01
Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analysis of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin, and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible, or three-way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947
Monte Carlo Computer Simulation of a Rainbow.
ERIC Educational Resources Information Center
Olson, Donald; And Others
1990-01-01
Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
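The article's program is in BASIC; the underlying physics of the primary rainbow can be sketched in Python as well (an illustrative reconstruction, not the article's code): a ray refracts into a spherical droplet, reflects once internally, refracts out, and the rainbow appears at the angle of minimum total deviation.

```python
import math

def deviation(i_deg, n=1.333):
    """Total deviation (degrees) of a ray entering a water droplet at
    incidence angle i_deg, refracting, reflecting once internally,
    and refracting out: D = 180 + 2i - 4r, with r from Snell's law."""
    i = math.radians(i_deg)
    r = math.asin(math.sin(i) / n)  # Snell's law at the surface
    return math.degrees(math.pi + 2 * i - 4 * r)

# Scan incidence angles; the rainbow sits at the minimum deviation (caustic).
d_min = min(deviation(i / 10) for i in range(1, 900))
rainbow_angle = 180.0 - d_min  # ~42 degrees above the antisolar point for water
```

Repeating the scan with a wavelength-dependent index n reproduces the dispersion of the colors that the appended program illustrates.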
CMS Monte Carlo production operations in a distributed computing environment
Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab
2008-01-01
Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.
Monte Carlo simulation by computer for life-cycle costing
NASA Technical Reports Server (NTRS)
Gralow, F. H.; Larson, W. J.
1969-01-01
Prediction of behavior and support requirements over the entire life cycle of a system enables accurate cost estimates using Monte Carlo computer simulation. The system reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate as a proof of principle the potential usefulness of cloud computing for rapid Monte Carlo radiotherapy dose calculation without the need for dedicated local computer hardware.
Incorporation of Monte-Carlo Computer Techniques into Science and Mathematics Education.
ERIC Educational Resources Information Center
Danesh, Iraj
1987-01-01
Described is a Monte Carlo method for modeling physical systems with a computer. Also discussed are ways to incorporate Monte Carlo simulation techniques into introductory science and mathematics teaching and to enrich computer and simulation courses. (RH)
astroABC: Approximate Bayesian Computation Sequential Monte Carlo sampler
NASA Astrophysics Data System (ADS)
Jennings, Elise
2017-05-01
astroABC is a Python implementation of an Approximate Bayesian Computation Sequential Monte Carlo (ABC SMC) sampler for parameter estimation. astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. It has the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available.
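astroABC implements a full ABC SMC sampler; the core approximate Bayesian computation idea it builds on can be illustrated with plain rejection sampling (a toy sketch under assumed names, not astroABC's API or algorithm): draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one.

```python
import random

def abc_rejection(observed, simulate, prior, n_accept=200, eps=0.1, seed=3):
    """Basic ABC rejection: keep prior draws whose simulated summary
    statistic falls within eps of the observed value."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return accepted

# Toy forward model: the summary statistic is the mean of 100 draws
# from N(theta, 1); the "observed" mean is 2.0.
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(100)) / 100

posterior = abc_rejection(2.0, simulate, prior=lambda rng: rng.uniform(0.0, 4.0))
```

SMC variants such as astroABC's improve on this by shrinking eps over a sequence of weighted particle populations, which is far more efficient when the prior is broad; the parallelization over MPI groups described above distributes exactly these forward-model calls.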
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment in a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to published values and error margins indicated that the simulation results were not affected by cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. This ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
Monte Carlo calculation of patient organ doses from computed tomography.
Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi
2014-01-01
In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from chamber measurements of the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation at 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL agreed within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured values within 5% over a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For the femur and pelvic bones, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone were up to 2-3 times those for soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE performs Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep-penetration effects are equally important. To make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that applying VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window (WW) represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting and a long history. When running on parallel clusters, a long history can have a detrimental effect on parallel efficiency: while one process computes the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep-penetration effects. (authors)
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.
Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Monte Carlo Computational Modeling of Atomic Oxygen Interactions
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.
2017-01-01
Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environmental durability. Successful modeling requires that characteristics of the environment such as the atomic oxygen energy distribution, flux, and angular distribution be properly represented in the model. Thus, whether the atomic oxygen arrives normal or inclined to a surface, and whether it arrives from a consistent direction or sweeps across the surface (as in the case of polymeric solar array blankets), is important for determining durability. When atomic oxygen impacts a polymer surface, it can react, removing a certain volume per incident atom (called the erosion yield); recombine; or be ejected as an active oxygen atom that may either react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides, such as SiO2, to prevent atomic oxygen erosion. Such protective films also have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival and how the atomic oxygen reacts with a representative polymer (polyimide Kapton H) and at defect sites in an oxide protective coating, such as SiO2, on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components, including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two-dimensional model
Image based Monte Carlo Modeling for Computational Phantom
NASA Astrophysics Data System (ADS)
Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican
2014-06-01
The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, and is helpful for avoiding unnecessary radiation and decreasing harm to the human body. To accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone, and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several sets of medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a treatment planning system (TPS), as well as radiation exposure of the human body for radiation protection.
Forward Monte Carlo Computations of Polarized Microwave Radiation
NASA Technical Reports Server (NTRS)
Battaglia, A.; Kummerow, C.
2000-01-01
Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are indeed efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. To solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, treat the atmosphere as horizontally homogeneous with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders
Viveros-Méndez, P. X.; Aranda-Espinoza, S.
2014-07-28
In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles, and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e^2/Dσ), where m is the mass per particle, e is the electron's charge, and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by examining every layer formed by the CHSC along the gravitational field. With increasing Γ, particles tend to pack more closely in each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
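The NVT Metropolis scheme behind such sedimentation studies can be reduced to a toy sketch (point particles with only the gravitational energy term; the hard-core and Wolf electrostatic interactions of the paper are deliberately omitted, and all parameter values are illustrative):

```python
import math
import random

def sediment_profile(n_particles=200, n_sweeps=1500, beta_mg=1.0,
                     height=10.0, seed=5):
    """NVT Metropolis sweeps over particle heights z with energy
    U(z) = m*g*z; a trial move is accepted with probability
    min(1, exp(-beta * dU)), i.e. downhill moves always, uphill
    moves with Boltzmann probability."""
    rng = random.Random(seed)
    z = [rng.uniform(0.0, height) for _ in range(n_particles)]
    for _ in range(n_sweeps):
        for i in range(n_particles):
            trial = z[i] + rng.gauss(0.0, 0.5)
            if 0.0 <= trial <= height:  # hard walls at the bottom and top
                if math.log(rng.random()) < -beta_mg * (trial - z[i]):
                    z[i] = trial
    return z

# At equilibrium the density follows the barometric law ~ exp(-beta*m*g*z),
# so the mean height approaches 1/(beta*m*g) for a tall enough column.
heights = sediment_profile()
```

Adding hard-body overlap checks and a charged pair potential to dU is what turns this skeleton into the kind of simulation reported above, where the interplay of the two energy scales (the ratio Γ) produces layering and tetratic ordering.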
ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments
NASA Astrophysics Data System (ADS)
Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob
2014-06-01
The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
Dose spread functions in computed tomography: A Monte Carlo study
Boone, John M.
2009-10-15
Purpose: Current CT dosimetry employing CTDI methodology has come under fire in recent years, partially in response to the increasing width of collimated x-ray fields in modern CT scanners. This study was conducted to provide a better understanding of the radiation dose distributions in CT. Methods: Monte Carlo simulations were used to evaluate radiation dose distributions along the z axis arising from CT imaging in cylindrical phantoms. Mathematical cylinders were simulated with compositions of water, polymethyl methacrylate (PMMA), and polyethylene. Cylinder diameters from 10 to 50 cm were studied. X-ray spectra typical of several CT manufacturers (80, 100, 120, and 140 kVp) were used. In addition to no bow tie filter, the head and body bow tie filters from modern General Electric and Siemens CT scanners were evaluated. Each cylinder was divided into three concentric regions of equal volume such that the energy deposited is proportional to dose for each region. Two additional dose assessment regions, central and edge locations 10 mm in diameter, were included for comparisons to CTDI{sub 100} measurements. Dose spread functions (DSFs) were computed for a wide number of imaging parameters. Results: DSFs generally exhibit a biexponential falloff from the z=0 position. For a very narrow primary beam input (<<1 mm), DSFs demonstrated significant low amplitude long range scatter dose tails. For body imaging conditions (30 cm diameter in water), the DSF at the center showed a full width at tenth maximum (FWTM) of {approx}160 mm, while at the edge the FWTM was {approx}80 mm. Polyethylene phantoms exhibited wider DSFs than PMMA or water, as did higher tube voltages in any material. The FWTM values were 80, 180, and 250 mm for 10, 30, and 50 cm phantom diameters, respectively, at the center in water at 120 kVp with a typical body bow tie filter. Scatter to primary dose ratios (SPRs) increased with phantom diameter from 4 at the center (1 cm diameter) for a 16 cm diameter cylinder
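A biexponential DSF of the kind the abstract describes can be modeled and its FWTM extracted numerically. The amplitudes and decay lengths below are invented placeholders, not the paper's fitted values.

```python
import math

def dsf(z, a1=0.7, L1=15.0, a2=0.3, L2=60.0):
    """Biexponential dose spread function (placeholder parameters, mm):
    a short-range primary/first-scatter term plus a long-range scatter tail."""
    return a1 * math.exp(-abs(z) / L1) + a2 * math.exp(-abs(z) / L2)

def fwtm(f, zmax=500.0, dz=0.01):
    """Full width at tenth maximum of a symmetric profile peaked at z = 0."""
    peak = f(0.0)
    z = 0.0
    while z < zmax and f(z) > 0.1 * peak:
        z += dz
    return 2.0 * z
```

For a single exponential exp(-|z|/L), this returns the analytic value 2·L·ln(10), which is a convenient sanity check for the numerical width extraction.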
Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
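The three-step combination the abstract outlines can be sketched as a small Monte Carlo with hypothetical inputs: a number-of-deposits distribution (step 2) combined with lognormal grade/tonnage models, including the grade-tonnage dependency the paper emphasizes. All numbers are illustrative, not the Alaska or Seward Peninsula estimates.

```python
import math
import random

def contained_metal(n_trials, number_dist, ton_mu, ton_sigma,
                    grade_mu, grade_sigma, rho=-0.3, seed=1):
    """Step 3 of the assessment: combine the estimated number-of-deposits
    distribution with lognormal grade/tonnage models (rho correlates
    log-tonnage with log-grade) into a distribution of total contained metal."""
    rng = random.Random(seed)
    numbers = list(number_dist)
    weights = [number_dist[k] for k in numbers]
    totals = []
    for _ in range(n_trials):
        n = rng.choices(numbers, weights=weights)[0]
        total = 0.0
        for _ in range(n):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            tonnage = math.exp(ton_mu + ton_sigma * z1)    # tonnes of ore
            grade = math.exp(grade_mu + grade_sigma * z2)  # metal fraction
            total += tonnage * grade
        totals.append(total)
    return sorted(totals)

# Hypothetical area: 0-3 deposits, median 1 Mt of ore, median grade 0.5%
totals = contained_metal(5000, {0: 0.3, 1: 0.4, 2: 0.2, 3: 0.1},
                         ton_mu=math.log(1e6), ton_sigma=1.0,
                         grade_mu=math.log(0.005), grade_sigma=0.5)
```

The sorted output is the empirical probability distribution of contained metal; quantiles of it feed the economic analysis.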
NASA Astrophysics Data System (ADS)
Decker, K. M.; Jayewardena, C.; Rehmann, R.
We describe the library lgtlib and lgttool, the corresponding development environment for Monte Carlo simulations of lattice gauge theory on multiprocessor vector computers with shared memory. We explain why distributed memory parallel processor (DMPP) architectures are particularly appealing for compute-intensive scientific applications, and introduce the design of a general application and program development environment system for scientific applications on DMPP architectures.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
Pratx, Guillem; Xing, Lei
2011-01-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
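The Map/Reduce split the abstract describes can be mimicked in plain Python: map tasks trace independent photon histories and emit (bin, absorbed weight) pairs, and a reduce task sums them per bin. The 1-D walk and the coefficients below are simplifications for illustration, not the MC321 physics.

```python
import math
import random
from collections import defaultdict

MU_A, MU_S = 0.1, 10.0   # assumed absorption/scattering coefficients (1/cm)

def map_photons(seed, n_photons):
    """Map task: trace photon histories independently and emit
    (depth_bin, absorbed_weight) key-value pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_photons):
        z, w = 0.0, 1.0
        while w > 1e-4:                                   # termination threshold
            step = -math.log(1.0 - rng.random()) / (MU_A + MU_S)  # free path
            z = abs(z + step * rng.uniform(-1.0, 1.0))    # crude 1-D walk
            dep = w * MU_A / (MU_A + MU_S)                # partial absorption
            pairs.append((int(z * 10.0), dep))            # 1 mm depth bins
            w -= dep
    return pairs

def reduce_absorption(all_pairs):
    """Reduce task: score total absorbed weight per depth bin."""
    tally = defaultdict(float)
    for k, v in all_pairs:
        tally[k] += v
    return dict(tally)

# Two "map tasks" with independent seeds, one "reduce"
pairs = map_photons(seed=1, n_photons=50) + map_photons(seed=2, n_photons=50)
heat = reduce_absorption(pairs)
```

Because map tasks share no state, losing a node only means re-running its seed, which is the fault-tolerance property the paper exploits in Hadoop.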
Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency
Chen, Jin; Intes, Xavier
2011-01-01
Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and the reliability of the statistics were evaluated. Also, the methods were applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time of the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs is used. PMID:21992393
An Overview of the NCC Spray/Monte-Carlo-PDF Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2000-01-01
This paper advances the state-of-the-art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function) methods, unstructured grids and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment. However, the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we end the paper by demonstrating the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application, developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based: the simulations just need to be of the form input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
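The Amdahl's-law behavior reported above is easy to reproduce: inverting the law with the paper's measured numbers (a 37× speedup on 64 instances) gives the implied parallelizable fraction of the workload.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n instances given parallelizable fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law to recover p from a measured speedup on n instances."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

# The reported run: 37x on 64 instances implies p close to 0.99, i.e. roughly
# 1% of the runtime was effectively non-parallelizable.
p = parallel_fraction(37.0, 64)
```

The "slight deviation" the authors mention corresponds to p itself shrinking as instances are added, since per-instance setup time grows the serial fraction.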
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
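The idea in this abstract can be sketched with a hypothetical system model: each component's misalignment is drawn from its own full distribution and propagated through the system response, yielding unbiased performance statistics. The root-sum-square response and the Gaussian tolerances are assumptions for illustration.

```python
import random
import statistics

def system_error(errors):
    """Hypothetical stack-up model: overall error is the root-sum-square of
    the individual component misalignments."""
    return sum(e * e for e in errors) ** 0.5

def monte_carlo_performance(component_sigmas, n_trials=10000, seed=7):
    """Draw every component's misalignment from its own distribution and
    propagate it through the system model, giving unbiased statistics
    (here: the mean and the 95th percentile of the system error)."""
    rng = random.Random(seed)
    samples = [system_error([rng.gauss(0.0, s) for s in component_sigmas])
               for _ in range(n_trials)]
    return statistics.fmean(samples), statistics.quantiles(samples, n=100)[94]
```

Because the full distributions are sampled rather than summarized, nonlinear responses and tail behavior come out without bias, which is the point the abstract makes.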
Adding computationally efficient realism to Monte Carlo turbulence simulation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
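A first-order example of the concept: a stable, explicit difference equation whose rational spectrum approximates a Dryden-type gust spectrum. The coefficients are chosen for illustration; the paper's three-dimensional cross-spectra require higher-order systems of such equations.

```python
import math
import random

def dryden_like_gust(n, sigma, L, V, dt, seed=3):
    """Generate turbulence samples from the stable explicit difference
    equation u[k] = a*u[k-1] + b*w[k], w white Gaussian noise.  The pole a
    comes from a rational approximation to the spectrum; b is scaled so the
    output variance is sigma^2 in stationarity."""
    a = math.exp(-V * dt / L)            # pole: airspeed V, length scale L
    b = sigma * math.sqrt(1.0 - a * a)   # preserves the target variance
    rng = random.Random(seed)
    u, out = 0.0, []
    for _ in range(n):
        u = a * u + b * rng.gauss(0.0, 1.0)
        out.append(u)
    return out
```

One multiply and one add per sample is what makes this class of generator cheap enough for real-time flight simulation.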
Alerstam, Erik; Svensson, Tomas; Andersson-Engels, Stefan
2008-01-01
General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than the same simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
Monte Carlo radiative heat transfer simulation on a reconfigurable computer
Gokhale, M.; Ahrens, C. M.; Frigo, J.; Minnich, R. G.; Tripp J. L.
2004-01-01
Recently, the appearance of very large (3-10M gate) FPGAs with embedded arithmetic units has opened the door to the possibility of floating point computation on these devices. While previous researchers have described peak performance or kernel matrix operations, there is as yet little experience with mapping an application-specific floating point pipeline onto FPGAs. In this work, we port a supercomputer application benchmark onto Xilinx Virtex II and II Pro FPGAs and compare performance with a comparable microprocessor implementation. Our results show that this application-specific pipeline, with 12 multiply, 10 add/subtract, one divide, and two compare modules of single precision floating point data type, shows a speedup of 1.6x-1.7x. We analyze the trade-offs between hardware and software 'sweet spots' to characterize the algorithms that will perform well on current and future FPGA architectures.
Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation
Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.
2004-11-15
In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
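For a binary-binned source the idea reduces to estimating an AR(1) coefficient, which plays the role of the dominance ratio. A sketch on synthetic cycle-to-cycle data with a known ratio of 0.8 (the paper's full ARMA(p, p-1) fitting and its eigenmode analysis are beyond this illustration):

```python
import random
import statistics

def ar1_coefficient(xs):
    """Yule-Walker (lag-1 autocorrelation) estimate of the AR(1) coefficient;
    for a binary-binned fission source this estimates the dominance ratio."""
    mean = statistics.fmean(xs)
    num = sum((xs[i] - mean) * (xs[i - 1] - mean) for i in range(1, len(xs)))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# Synthetic cycle-to-cycle source imbalance with a known "dominance ratio" 0.8
rng = random.Random(5)
rho, x, series = 0.8, 0.0, []
for _ in range(5000):
    x = rho * x + rng.gauss(0.0, 1.0)
    series.append(x)
```

Note that nothing in the estimator requires changing the source iteration itself, which matches the paper's claim that the basic cycle algorithm is left untouched.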
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. In order to improve computational efficiency for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms
NASA Astrophysics Data System (ADS)
Rondeau, Maxime; Isnard, L.; Arès, R.
2017-07-01
This paper presents an approach to simulate the free molecular flow in vacuum systems by using a Monte Carlo method to solve the Boltzmann particle transport equation with no intermolecular collisions. Although sometimes referred to as point-to-source Monte Carlo path tracing in image rendering, in this paper the name is borrowed from the thermal radiation and heat transfer field: reverse Monte Carlo path tracing. It is shown that this method provides better accuracy and stability when computing the Clausing function than the standard test-particle Monte Carlo method used for free molecular flow. The Clausing function leads to the distribution function of positions and velocities from which the particle density map, pressure gradient, energy flux, and other local quantities can be computed. Using reverse path tracing, the particle concentration in a conical segment is computed, and the maximal flow input is determined by calculating the mean free path at the maximum density position.
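The standard test-particle method that the paper compares against can be sketched directly: cosine-law (Lambert) emission at the inlet and diffuse wall reflection in a cylinder give a Monte Carlo estimate of the Clausing (transmission) factor. This is the baseline technique, not the paper's reverse path tracing.

```python
import math
import random

def cosine_dir(normal, rng):
    """Sample a direction from the cosine (Lambert) law about a unit normal."""
    nx, ny, nz = normal
    t = (0.0, 0.0, 1.0) if abs(nz) < 0.9 else (1.0, 0.0, 0.0)
    ux, uy, uz = (ny * t[2] - nz * t[1], nz * t[0] - nx * t[2],
                  nx * t[1] - ny * t[0])
    norm = math.sqrt(ux * ux + uy * uy + uz * uz)
    ux, uy, uz = ux / norm, uy / norm, uz / norm
    vx, vy, vz = ny * uz - nz * uy, nz * ux - nx * uz, nx * uy - ny * ux
    ct = math.sqrt(rng.random())                 # cos(theta) = sqrt(U)
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * rng.random()
    a, b = st * math.cos(phi), st * math.sin(phi)
    return (a * ux + b * vx + ct * nx,
            a * uy + b * vy + ct * ny,
            a * uz + b * vz + ct * nz)

def clausing_factor(L, n=20000, seed=11):
    """Test-particle MC estimate of the transmission (Clausing) factor of a
    cylindrical tube of radius 1 and length L with diffusely reflecting walls."""
    rng = random.Random(seed)
    through = 0
    for _ in range(n):
        r = math.sqrt(rng.random())              # uniform point on inlet disk
        phi = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
        d = cosine_dir((0.0, 0.0, 1.0), rng)
        while True:
            dx, dy, dz = d
            a = dx * dx + dy * dy
            tw = math.inf                        # distance to the wall r = 1
            if a > 1e-12:
                bq = x * dx + y * dy
                disc = bq * bq - a * (x * x + y * y - 1.0)
                tw = (-bq + math.sqrt(max(disc, 0.0))) / a
            if dz > 1e-12:                       # distance to an end plane
                tz = (L - z) / dz
            elif dz < -1e-12:
                tz = -z / dz
            else:
                tz = math.inf
            if tz < tw:                          # leaves through an end plane
                through += dz > 0.0
                break
            x, y, z = x + tw * dx, y + tw * dy, z + tw * dz
            nr = math.hypot(x, y)                # diffuse re-emission inward
            d = cosine_dir((-x / nr, -y / nr, 0.0), rng)
    return through / n
```

For L/R = 2 the tabulated Clausing factor is about 0.51, and it tends to 1 as the tube shortens; the estimator's 1/sqrt(n) noise near these values is the stability issue the paper's reverse method addresses.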
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo
1998-01-01
A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings as well as the scattering processes which occur have been optimized to replicate experimental results observed from protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings for various arrival energies was investigated. The atomic oxygen undercutting energy dependence predictions enable one to predict mass loss that would occur in low Earth orbit, based on lower energy ground laboratory atomic oxygen beam systems. Results of computational model prediction of undercut cavity size as a function of energy and defect size will be presented to provide insight into expected in-space mass loss of protected polymers with protective coating defects based on lower energy ground laboratory testing.
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
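The core ABC idea (simulate mock data, compare summaries, keep parameters within a tolerance) can be shown with a plain rejection sampler on a toy problem; COSMOABC's Population Monte Carlo refinement and adaptive importance sampling are omitted here.

```python
import random
import statistics

def abc_sample(observed, simulate, prior_draw, distance, eps, n_accept, seed=2):
    """Minimal ABC rejection sampler: draw theta from the prior, simulate a
    mock summary, and keep theta when the distance to the observed summary
    is below the tolerance eps.  No likelihood is ever evaluated."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        mock = simulate(theta, rng)
        if distance(mock, observed) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: recover the mean of a Gaussian with known sigma = 1
true_mean = 2.0
rng0 = random.Random(0)
obs = [rng0.gauss(true_mean, 1.0) for _ in range(100)]
post = abc_sample(
    observed=statistics.fmean(obs),
    simulate=lambda th, r: statistics.fmean(r.gauss(th, 1.0) for _ in range(100)),
    prior_draw=lambda r: r.uniform(-10.0, 10.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1, n_accept=50)
```

Rejection ABC wastes most prior draws when the posterior is narrow; the Population Monte Carlo variant in COSMOABC exists precisely to shrink eps adaptively while reusing accepted particles.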
Comparing variational Bayes with Markov chain Monte Carlo for Bayesian computation in neuroimaging.
Nathoo, F S; Lesperance, M L; Lawson, A B; Dean, C B
2013-08-01
In this article, we consider methods for Bayesian computation within the context of brain imaging studies. In such studies, the complexity of the resulting data often necessitates the use of sophisticated statistical models; however, the large size of these data can pose significant challenges for model fitting. We focus specifically on the neuroelectromagnetic inverse problem in electroencephalography, which involves estimating the neural activity within the brain from electrode-level data measured across the scalp. The relationship between the observed scalp-level data and the unobserved neural activity can be represented through an underdetermined dynamic linear model, and we discuss Bayesian computation for such models, where parameters represent the unknown neural sources of interest. We review the inverse problem and discuss variational approximations for fitting hierarchical models in this context. While variational methods have been widely adopted for model fitting in neuroimaging, they have received very little attention in the statistical literature, where Markov chain Monte Carlo is often used. We derive variational approximations for fitting two models: a simple distributed source model and a more complex spatiotemporal mixture model. We compare the approximations to Markov chain Monte Carlo using both synthetic data as well as through the analysis of a real electroencephalography dataset examining the evoked response related to face perception. The computational advantages of the variational method are demonstrated, and the accuracy of the resulting approximations is clarified.
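The MCMC side of such comparisons is typically a random-walk Metropolis sampler; a minimal version on a toy one-dimensional posterior (the paper's hierarchical EEG models are far larger, which is exactly why variational approximations become attractive):

```python
import math
import random

def metropolis(logpost, x0, n, step, seed=4):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step^2) and
    accept with probability min(1, post(x')/post(x))."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: standard normal log-density (up to an additive constant)
chain = metropolis(lambda v: -0.5 * v * v, x0=5.0, n=20000, step=1.0)
samples = chain[1000:]      # discard burn-in from the deliberately bad start
```

The cost structure is visible even here: thousands of correlated draws to characterize one scalar, versus a variational method's single deterministic optimization.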
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, randomly seeded population-level outcome probabilities drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (Sic10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found that the DMFT of all cases where DMFT<>0 was 2.3 (n = 128,609) and the DMFT for Indigenous cases only was 1.9 (n = 13,749). In the simulated population the Sic25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated.
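The seeding of individual-level data from population-level probabilities can be sketched as follows. The strata shares and mean DMFT values are invented placeholders (not the 2005-2006 Australian figures), and the Poisson individual model is an assumption for illustration.

```python
import math
import random

# Hypothetical population-level inputs: (ses, indigenous) -> (share, mean DMFT)
STRATA = {
    ("low",  True):  (0.02, 2.4),
    ("low",  False): (0.28, 1.4),
    ("high", True):  (0.01, 1.6),
    ("high", False): (0.69, 0.8),
}

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def simulate_children(n, rng):
    """Assign each child to a stratum by its population share, then draw an
    individual DMFT count from that stratum's mean."""
    keys = list(STRATA)
    shares = [STRATA[k][0] for k in keys]
    return [poisson(STRATA[rng.choices(keys, weights=shares)[0]][1], rng)
            for _ in range(n)]

rng = random.Random(12)
dmft = simulate_children(20000, rng)
sic10 = sum(sorted(dmft)[-2000:]) / 2000.0   # mean DMFT of the worst 10%
```

Summaries such as the mean DMFT and Sic10 computed on the simulated individuals are what the study compared, run to run, against the sampled population data.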
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented here to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by a single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
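The near-ideal scaling reported here (2.58 h single-threaded vs. 3.3 min on 100 nodes) can be illustrated with a simple runtime model. This is a sketch only; the overhead and serial-fraction parameters are assumptions for illustration, not values measured in the study:

```python
def parallel_runtime(t_single, n_nodes, overhead=0.0, serial_fraction=0.0):
    """Idealized runtime (same units as t_single) of an embarrassingly
    parallel Monte Carlo run distributed over n_nodes workers."""
    serial = t_single * serial_fraction          # work that cannot be split
    parallel = t_single * (1.0 - serial_fraction) / n_nodes
    return serial + parallel + overhead

# The paper's numbers: 2.58 h = 154.8 min single-threaded, 100 nodes.
ideal = parallel_runtime(154.8, 100)   # 1.548 min with zero overhead
# The gap to the observed 3.3 min would be attributed to fixed per-run
# overhead (cluster allocation, data staging, result aggregation):
with_overhead = parallel_runtime(154.8, 100, overhead=3.3 - ideal)
```

Under this model the overhead term becomes negligible relative to the compute term as the simulation size grows, consistent with the abstract's observation.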
Chow, J
2015-06-15
Purpose: This study evaluated the efficiency of 4D lung radiation treatment planning using Monte Carlo simulation on the cloud. The EGSnrc Monte Carlo code was used for dose calculation on the 4D-CT image set. Methods: The 4D lung radiation treatment plan was created by DOSCTP linked to the cloud, based on the Amazon Elastic Compute Cloud platform. Dose calculation was carried out by Monte Carlo simulation on the 4D-CT image set on the cloud, and the results were sent to the FFD4D image deformation program for dose reconstruction. The dependence of computing time on the number of compute nodes was characterized while varying the number of CT image sets in the breathing cycle and the dose reconstruction time of the FFD4D. Results: The dependence of computing time on the number of compute nodes was affected by the diminishing return of adding nodes to the Monte Carlo simulation. Moreover, the performance of 4D treatment planning could be optimized by using fewer than 10 compute nodes on the cloud. The number of image sets and the dose reconstruction time did not significantly affect the dependence of computing time on the number of nodes once more than 15 compute nodes were used in the Monte Carlo simulations. Conclusion: The problem of long computing times in 4D treatment planning, which requires Monte Carlo dose calculations on all CT image sets in the breathing cycle, can be solved using cloud computing technology. It is concluded that the optimal number of compute nodes selected for simulation should be between 5 and 15, as the dependence of computing time on the number of nodes is significant in this range.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of an oil and gas reservoir requires an understanding of its response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. Because subsurface properties are typically heterogeneous, the resulting models have a large number of parameters, and the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added MLMC level, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of time steps needed. This is achieved using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
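The MLMC idea referenced above rests on the telescoping sum E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}], with most samples taken on cheap coarse levels. The following is a minimal sketch of plain MLMC on a toy model (a trapezoid-rule integral whose resolution doubles per level); the paper's warm-starting acceleration is not shown, and all names and sample counts are illustrative:

```python
import math
import random

def level_estimate(l, u):
    """Toy 'model' at level l: trapezoid approximation of
    integral_0^u sin(t) dt using 2**l intervals; converges to 1 - cos(u)."""
    n = 2 ** l
    h = u / n
    s = 0.5 * (math.sin(0.0) + math.sin(u))
    for k in range(1, n):
        s += math.sin(k * h)
    return s * h

def mlmc(max_level, samples_per_level, seed=0):
    """Multilevel Monte Carlo estimate of E[model(U)], U ~ Uniform(0, pi),
    via the telescoping sum over level differences."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(max_level + 1):
        n = samples_per_level[l]
        acc = 0.0
        for _ in range(n):
            u = rng.random() * math.pi
            fine = level_estimate(l, u)
            coarse = level_estimate(l - 1, u) if l > 0 else 0.0
            acc += fine - coarse       # same random input on both levels
        total += acc / n
    return total
```

The exact answer here is E[1 − cos(U)] = 1; the coupled fine/coarse evaluations make the correction terms small, so few samples are needed on the expensive levels.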
Carney, J.H.; Gardner, R.H.; Mankin, J.B.; O'Neill, R.V.
1981-03-01
The effect of uncertainties in ecological models can be systematically studied by Monte Carlo techniques to obtain the uncertainty of model predictions. The Monte Carlo procedure requires a program which generates random parameter values and obtains numerical solutions. This report documents the general procedures used for the Monte Carlo error analysis of a stream model, along with the computer programs and subroutines that have been developed to simplify this task. An example of the results is provided in the appendices with sufficient information given to adapt the methods to other models.
Zou Yu; Kavousanakis, Michail E.; Kevrekidis, Ioannis G.; Fox, Rodney O.
2010-07-20
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
Ideal-observer computation in medical imaging with use of Markov-chain Monte Carlo techniques.
Kupinski, Matthew A; Hoppin, John W; Clarkson, Eric; Barrett, Harrison H
2003-03-01
The ideal observer sets an upper limit on the performance of an observer on a detection or classification task. The performance of the ideal observer can be used to optimize hardware components of imaging systems and also to determine another observer's relative performance in comparison with the best possible observer. The ideal observer employs complete knowledge of the statistics of the imaging system, including the noise and object variability. Thus computing the ideal observer for images (large-dimensional vectors) is burdensome without severely restricting the randomness in the imaging system, e.g., assuming a flat object. We present a method for computing the ideal-observer test statistic and performance by using Markov-chain Monte Carlo techniques when we have a well-characterized imaging system, knowledge of the noise statistics, and a stochastic object model. We demonstrate the method by comparing three different parallel-hole collimator imaging systems in simulation.
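The ideal-observer computation above hinges on Markov-chain sampling of a stochastic object model. The core building block is a random-walk Metropolis kernel; the sketch below shows that kernel in isolation (it is not the authors' full likelihood-ratio estimator, and the target and step size are illustrative):

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=1):
    """Random-walk Metropolis sampler: propose a Gaussian perturbation,
    accept with probability min(1, target_ratio). Returns the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Sample a standard normal target (log density up to a constant):
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

In the ideal-observer setting, chains like this one run over object-model parameters so that the likelihood, marginalized over object variability, can be estimated without restrictive assumptions such as a flat object.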
Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review
Paquet, Eric; Viktor, Herna L.
2015-01-01
Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamical observables, the conformational space must be efficiently sampled and the dynamic of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-08-31
The GATE Monte Carlo simulation platform has good application prospects for treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distributions using MapReduce. An Amazon Machine Image with Hadoop and GATE installed is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server, and the cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single-worker-node case and the single-threaded case, respectively. A test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
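The Map/Reduce decomposition described above can be sketched in a few lines: the particle budget is partitioned into self-contained sub-jobs, and per-task dose maps are summed in the Reduce step. The helper names below are illustrative, not GATE's or Hadoop's actual tooling:

```python
def split_counts(total_particles, n_tasks):
    """Partition a particle budget into near-equal sub-macro budgets
    (the Map-side split of the GATE macro)."""
    base, rem = divmod(total_particles, n_tasks)
    return [base + 1 if i < rem else base for i in range(n_tasks)]

def reduce_dose(partial_maps):
    """Element-wise sum of per-task dose maps (the Reduce step).
    Summing works because dose tallies are additive across histories."""
    out = [0.0] * len(partial_maps[0])
    for dose in partial_maps:
        for i, d in enumerate(dose):
            out[i] += d
    return out
```

Because each sub-macro is self-contained, a failed task can simply be re-run, which is the property the fault-tolerance test above relies on.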
2012-01-01
Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which does not only lead to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363
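The multiple-chain strategy described above pairs naturally with between-chain convergence diagnostics. A minimal sketch of the standard Gelman-Rubin statistic (a common choice for this purpose, though the abstract does not name a specific diagnostic) follows; the synthetic chains stand in for real per-processor MCMC output:

```python
import math
import random

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) across parallel chains:
    compares between-chain and within-chain variance; values near 1
    indicate the chains have mixed."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)    # between
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m                # within
    var_hat = (n - 1) / n * w + b / n
    return math.sqrt(var_hat / w)

rng = random.Random(42)
chains = [[rng.gauss(0.0, 1.0) for _ in range(5000)] for _ in range(4)]
```

Since the four synthetic chains here sample the same distribution, R-hat comes out very close to 1; poorly mixed chains would give a noticeably larger value.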
Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study
Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2015-06-15
Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region's visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms, 13 cm in diameter and 10 cm long, were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence of composition on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm^2 field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted
A Monte Carlo tool for raster-scanning particle therapy dose computation
NASA Astrophysics Data System (ADS)
Jelen, U.; Radon, M.; Santiago, A.; Wittig, A.; Ammazzalorso, F.
2014-03-01
The purpose of this work was to implement Monte Carlo (MC) dose computation in realistic patient geometries with raster-scanning, the most advanced ion beam delivery technique, which combines magnetic beam deflection with energy variation. FLUKA, a Monte Carlo package well established in particle therapy applications, was extended to simulate raster-scanning delivery with clinical data, a capability unavailable as a built-in feature. A new complex beam source, compatible with the FLUKA public programming interface, was implemented in Fortran to model the specific properties of raster-scanning, i.e. delivery by means of multiple spot sources with variable spatial distributions, energies and numbers of particles. The source was plugged into the MC engine through the user hook system provided by FLUKA. Additionally, routines were provided to populate the beam source with treatment plan data, stored in DICOM RTPlan or TRiP98's RST format, enabling MC recomputation of clinical plans. Finally, facilities were integrated to read computerised tomography (CT) data into FLUKA. The tool was used to recompute two representative carbon ion treatment plans, a skull base and a prostate case, prepared with analytical dose calculation (TRiP98). Selected, clinically relevant issues influencing the dose distributions were investigated: (1) presence of positioning errors, (2) influence of fiducial markers and (3) variations in pencil beam width. Notable differences in the modelling of these challenging situations were observed between the analytical and Monte Carlo results. In conclusion, a tool was developed to support particle therapy research and treatment when high-precision MC calculations are required, e.g. in the presence of severe density heterogeneities or in quality assurance procedures.
Monte Carlo assessment of computed tomography dose to tissue adjacent to the scanned volume.
Boone, J M; Cooper, V N; Nemzek, W R; McGahan, J P; Seibert, J A
2000-10-01
The assessment of the radiation dose to internal organs or to an embryo or fetus is required on occasion for risk assessment or for comparing imaging studies. Limited resources hinder the ability to accurately assess the radiation dose received at locations outside the tissue volume actually scanned during computed tomography (CT). The purpose of this study was to assess peripheral doses and provide tabular data for dose evaluation. Validated Monte Carlo simulation techniques were used to compute the dose distribution along the length of water-equivalent cylindrical phantoms, 16 and 32 cm in diameter. For further validation, comparisons between physically measured and Monte Carlo-derived air kerma profiles were performed and showed excellent (1% to 2%) agreement. Polyenergetic x-ray spectra at 80, 100, 120, and 140 kVp with beam shaping filters were studied. Using 10^8 simulated photons input to the cylinders perpendicular to their long axis, line spread functions (LSF) of the dose distribution were determined at three depths in the cylinders (center, mid-depth, and surface). The LSF data were then used with appropriate mathematics to compute dose distributions along the long axis of the cylinder. The dose distributions resulting from helical (pitch = 1.0) scans and axial scans were approximately equivalent. Beyond about 3 cm from the edge of the CT scanned tissue volume, the fall-off of radiation dose was exponential. A series of tables normalized at 100 milliampere seconds (mAs) were produced which allow the straightforward assessment of dose within and peripheral to the CT scanned volume. The tables should be useful for medical physicists and radiologists in the estimation of dose to sites beyond the edge of the CT scanned volume.
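The "appropriate mathematics" here is a superposition of the LSF over the scanned extent. The sketch below illustrates the idea with an assumed exponential-tail LSF; the decay constant and scan geometry are illustrative, not the study's tabulated values:

```python
import math

def lsf(d, mu=0.5):
    """Illustrative dose line spread function with an exponential tail;
    mu (per cm) is an assumed decay constant, not a measured value."""
    return math.exp(-mu * abs(d))

def dose_at(z, scan_start, scan_end, step=0.05):
    """Relative dose at axial position z (cm): superpose the LSF over
    the scanned extent via a midpoint-rule sum."""
    total = 0.0
    s = scan_start + step / 2
    while s < scan_end:
        total += lsf(z - s) * step
        s += step
    return total

# Scan from z = 0 to 10 cm; dose falls off beyond the scanned volume:
d_edge = dose_at(10.0, 0.0, 10.0)
d3 = dose_at(13.0, 0.0, 10.0)   # 3 cm beyond the edge
d6 = dose_at(16.0, 0.0, 10.0)   # 6 cm beyond the edge
```

With a purely exponential tail, the dose ratio between points beyond the edge depends only on their separation (here d3/d6 = e^(0.5·3)), which is the exponential fall-off behavior the abstract reports beyond about 3 cm.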
Region-oriented CT image representation for reducing computing time of Monte Carlo simulations
Sarrut, David; Guigues, Laurent
2008-04-15
Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: one box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speed-up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps with removal of all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but we could potentially use other Monte Carlo simulation codes. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gain. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.
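The core of the segmented-volume idea, removing unnecessary steps between adjacent voxels of the same material, can be illustrated in one dimension as run-length merging of voxel labels. This is a simplified analogue of the method, which in the paper operates on 3D CT volumes with distance maps:

```python
def segment_row(materials):
    """Collapse a row of voxel material labels into (label, run_length)
    segments, so a particle step crosses a boundary only where the
    material actually changes."""
    segments = []
    for m in materials:
        if segments and segments[-1][0] == m:
            segments[-1] = (m, segments[-1][1] + 1)   # extend current run
        else:
            segments.append((m, 1))                    # new material: new segment
    return segments

row = ["air", "air", "tissue", "tissue", "tissue", "bone", "tissue"]
```

For this row, seven voxel boundaries collapse to four segments; the fewer distinct materials a region contains, the greater the reduction, matching the abstract's note that the speed-up depends on geometry complexity and material count.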
NASA Astrophysics Data System (ADS)
Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.
2012-03-01
Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
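Importance sampling, the variance-reduction idea behind the OCT method above, concentrates samples where they matter and corrects with likelihood weights. The self-contained toy below applies the same principle to a standard rare-event problem (a Gaussian tail probability) rather than to photon transport, purely for illustration:

```python
import math
import random

def tail_prob_is(threshold=3.0, n=50000, seed=7):
    """Estimate P(X > threshold) for X ~ N(0,1) by importance sampling
    from the shifted proposal N(threshold, 1): samples land in the rare
    region often, and each is reweighted by p(x)/q(x)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)              # draw from the proposal
        if x > threshold:
            # p(x)/q(x) = exp(-threshold*x + threshold**2/2) for these normals
            acc += math.exp(-threshold * x + threshold ** 2 / 2)
    return acc / n
```

Because the weights are the exact density ratio, the estimator is unbiased, which mirrors the abstract's point that a properly weighted scheme avoids the systematic bias seen in some ad hoc acceleration methods.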
Uncertainty quantification through the Monte Carlo method in a cloud computing setting
NASA Astrophysics Data System (ADS)
Cunha, Americo; Nasser, Rafael; Sampaio, Rubens; Lopes, Hélio; Breitman, Karin
2014-05-01
The Monte Carlo (MC) method is the most common technique used for uncertainty quantification, due to its simplicity and good statistical results. However, its computational cost is extremely high, and, in many cases, prohibitive. Fortunately, the MC algorithm is easily parallelizable, which allows its use in simulations where the computation of a single realization is very costly. This work presents a methodology for the parallelization of the MC method, in the context of cloud computing. This strategy is based on the MapReduce paradigm, and allows an efficient distribution of tasks in the cloud. This methodology is illustrated on a problem of structural dynamics that is subject to uncertainties. The results show that the technique is capable of producing good results concerning statistical moments of low order. It is shown that even a simple problem may require many realizations for convergence of histograms, which makes the cloud computing strategy very attractive (due to its high scalability capacity and low-cost). Additionally, the results regarding the time of processing and storage space usage allow one to qualify this new methodology as a solution for simulations that require a number of MC realizations beyond the standard.
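The MapReduce parallelization of MC described above reduces to emitting sufficient statistics from each map task and merging them in the reduce step. A minimal sketch with a toy model (y = u², u uniform; the model and task counts are illustrative, not the structural-dynamics problem of the paper):

```python
import random

def mc_map(seed, n):
    """Map task: n independent realizations of the toy model y = u**2,
    u ~ Uniform(0,1). Emits (count, sum, sum of squares)."""
    rng = random.Random(seed)
    s = ss = 0.0
    for _ in range(n):
        y = rng.random() ** 2
        s += y
        ss += y * y
    return (n, s, ss)

def mc_reduce(partials):
    """Reduce task: merge partial statistics into mean and variance."""
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    ss = sum(p[2] for p in partials)
    mean = s / n
    var = ss / n - mean * mean
    return mean, var

# Eight "map tasks" with distinct seeds, merged by one "reduce":
mean, var = mc_reduce([mc_map(seed, 5000) for seed in range(8)])
# Exact values for this model: mean = 1/3, variance = 4/45.
```

Only the low-order moments flow across tasks, so communication stays constant as the number of realizations grows, which is what makes the cloud strategy scale.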
NASA Astrophysics Data System (ADS)
Cleveland, Mathew A.; Palmer, Todd S.
2013-09-01
Thermal heating from radiative heat transfer can have a significant effect on combustion systems. A variety of models have been developed to represent the strongly varying opacities found in combustion gases (Goutiere et al., 2000). This work evaluates the computational efficiency and load balance issues associated with two opacity models implemented in a 3D parallel Monte Carlo solver: the spectral-line-based weighted sum of gray gases (SLW) (Denison and Webb, 1993) and the spectral line-by-line (LBL) (Wang and Modest, 2007) opacity models. The parallel performance of the opacity models is evaluated using the Su and Olson (1999) frequency-dependent semi-analytic benchmark problem. Weak scaling, strong scaling, and history scaling studies were performed and comparisons were made for each opacity model. Comparisons of load balance sensitivities to these types of scaling were also evaluated. It was found that the SLW model has some attributes that might be valuable in a select set of parallel problems.
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E.
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different
Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations
Hendricks, J.S.; Brockhoff, R.C. (Applied Theoretical Physics Division)
1994-04-01
The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.
Yokohama, Noriya
2013-07-01
This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed an approximately 28 times faster speed than seen with single-thread architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost.
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes---in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with
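The nearest-neighbor fission bank idea above can be illustrated with a toy rebalancing calculation: ranks exchange surplus source sites only with adjacent ranks, so communication scales with the local imbalance rather than the total site count. This is a minimal counting sketch, not OpenMC's actual implementation; the function name is invented for illustration.

```python
def nearest_neighbor_balance(counts):
    """Total fission-bank sites that must cross rank boundaries when each
    rank trades surplus sites only with its immediate neighbors so that
    the final distribution is even. Toy model of the counting argument."""
    n, total = len(counts), sum(counts)
    # target cumulative counts for an even split across n ranks
    targets = [total * (i + 1) // n for i in range(n)]
    moved, running = 0, 0
    for i in range(n - 1):
        running += counts[i]
        moved += abs(running - targets[i])  # net flow across boundary i|i+1
    return moved
```

With a perfectly balanced bank nothing moves; with a lumpy one, only the surpluses travel, and they travel a single hop.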
Pediatric personalized CT-dosimetry Monte Carlo simulations, using computational phantoms
NASA Astrophysics Data System (ADS)
Papadimitroulas, P.; Kagadis, G. C.; Ploussi, A.; Kordolaimi, S.; Papamichail, D.; Karavasilis, E.; Syrgiamiotis, V.; Loudos, G.
2015-09-01
For the last 40 years, Monte Carlo (MC) simulations have served as a “gold standard” tool for a wide range of applications in the field of medical physics, and they tend to be essential in daily clinical practice. Regarding diagnostic imaging applications, such as computed tomography (CT), the assessment of deposited energy is of high interest, so as to better analyze the risks and the benefits of the procedure. In recent years, a substantial effort has been made toward personalized dosimetry, especially in pediatric applications. In the present study the GATE toolkit was used, and computational pediatric phantoms were modeled for the assessment of CT examination dosimetry. The pediatric models used come from the XCAT and IT'IS series. The X-ray spectrum of a Brightspeed CT scanner was simulated and validated with experimental data. Specifically, a DCT-10 ionization chamber was irradiated twice using 120 kVp with 100 mAs and 200 mAs, for 1 sec in 1 central axial slice (thickness = 10 mm). The absorbed dose was measured in air, resulting in differences lower than 4% between the experimental and simulated data. The simulations were acquired using ∼10^10 primaries in order to achieve low statistical uncertainties. Dose maps were also saved for quantification of the absorbed dose in several critical organs of children during CT acquisition.
Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.
Edwards, Stephen; Schick, Daniel
2016-04-01
Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.
Monte Carlo tolerancing tool using nonsequential ray tracing on a computer cluster
NASA Astrophysics Data System (ADS)
Reimer, Christopher
2010-08-01
The development of a flexible tolerancing tool for illumination systems based on Matlab® and Zemax® is described in this paper. Two computationally intensive techniques are combined: Monte Carlo tolerancing and non-sequential ray tracing. Implementation of the tool on a computer cluster allows for relatively rapid tolerancing. This paper explores the tool structure, describing how the task of tolerancing is split between Zemax and Matlab. An equation is derived that determines the number of simulated ray traces needed to accurately resolve illumination uniformity. Two examples of tolerancing illuminators are given. The first one is a projection system consisting of a pico-DLP, a light pipe, a TIR prism and the critical illumination relay optics. The second is a wide band, high performance Köhler illuminator, which includes a modified molded LED as the light source. As high performance illumination systems evolve, the practice of applying standard workshop tolerances to these systems may need to be re-examined.
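The paper's ray-count equation is not reproduced in the abstract, but a generic counting-statistics stand-in conveys the flavor: the Poisson noise in a detector bin falls as 1/√N, so resolving a relative uniformity tolerance rel_tol with a z-sigma margin needs roughly (z/rel_tol)² rays per bin. The function and its parameters are illustrative assumptions, not the paper's derivation.

```python
def rays_needed(n_bins, rel_tol, z=3.0):
    """Estimate total rays so each detector bin's Poisson noise
    (sigma/mean = 1/sqrt(N_bin)) stays below rel_tol with a z-sigma
    margin. Generic counting-statistics argument, names are ours."""
    per_bin = (z / rel_tol) ** 2  # rays per bin for the desired precision
    return int(n_bins * per_bin)
```

For a 100-bin target plane and a 1% tolerance at one sigma this already calls for a million rays, which is why the cluster implementation matters.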
Method for Performing an Efficient Monte Carlo Simulation of Lipid Mixtures on a Concurrent Computer
NASA Astrophysics Data System (ADS)
Moore, Andrew; Huang, Juyang; Gibson, Thomas
2003-10-01
We are interested in performing extensive Monte Carlo simulations of lipid mixtures in cell membranes. These computations will be performed on a GNU/Linux Beowulf cluster using the industry-standard Message Passing Interface (MPI) for handling node-to-node communication and overall program management. Devising an efficient parallel decomposition of the simulation is crucial for success. The goal is to balance the load on the compute nodes so that each does the same amount of work and to minimize the amount of (relatively slow) node-to-node communication. To this end, we report a method for performing simulations on a boundless three-dimensional surface. The surface is modeled by a two-dimensional array which can represent either a rectangular or triangular lattice. The array is distributed evenly across multiple processors in a block-row configuration. The sequence of calculations minimizes the delay from passing messages between nodes and uses the delay that does exist to perform local operations on each node.
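A block-row decomposition like the one described can be sketched as follows: the lattice rows are split into contiguous blocks whose sizes differ by at most one, so each MPI rank receives nearly equal work. This is a hedged sketch of the partitioning step only (names invented); the simulation itself and the MPI message passing are omitted.

```python
def block_rows(n_rows, n_ranks):
    """Split lattice rows into contiguous half-open blocks (start, stop),
    one per rank, with sizes differing by at most one row."""
    base, extra = divmod(n_rows, n_ranks)
    blocks, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)  # first `extra` ranks get one more
        blocks.append((start, start + size))
        start += size
    return blocks
```

Each rank then only needs the boundary rows of its two neighbors, which keeps node-to-node traffic proportional to the block perimeter rather than its area.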
Online object oriented Monte Carlo computational tool for the needs of biomedical optics
Doronin, Alexander; Meglinski, Igor
2011-01-01
Conceptual engineering design and optimization of laser-based imaging techniques and optical diagnostic systems used in the field of biomedical optics requires a clear understanding of the light-tissue interaction and the peculiarities of localization of the detected optical radiation within the medium. The description of photon migration within turbid tissue-like media is based on the concept of radiative transfer, which forms the basis of Monte Carlo (MC) modeling. The opportunity to directly simulate the influence of structural variations of biological tissues on the probing light makes MC a primary tool for biomedical optics and optical engineering. Due to the diversity of optical modalities utilizing different properties of light and mechanisms of light-tissue interactions, a new MC code typically has to be developed for each particular diagnostic application. In the current paper, introducing an object oriented concept of MC modeling and utilizing modern web applications, we present a generalized online computational tool suitable for the major applications in biophotonics. The computation is supported by an NVIDIA CUDA Graphics Processing Unit, providing acceleration of the modeling by up to 340 times. PMID:21991540
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-08-23
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.
A Monte Carlo Code to Compute Energy Fluxes in Cometary Nuclei
NASA Astrophysics Data System (ADS)
Moreno, F.; Muñoz, O.; López-Moreno, J. J.; Molina, A.; Ortiz, J. L.
2002-04-01
A Monte Carlo model designed to compute both the input and output radiation fields from spherical-shell cometary atmospheres has been developed. The code is an improved version of that by H. Salo (1988, Icarus 76, 253-269); it includes the computation of the full Stokes vector and can compute both the input fluxes impinging on the nucleus surface and the output radiation. This will have specific applications for the near-nucleus photometry, polarimetry, and imaging data collection planned in the near future from space probes. After carrying out some validation tests of the code, we consider here the effects of including the full 4×4 scattering matrix in the calculations of the radiative flux impinging on cometary nuclei. As input to the code we used realistic phase matrices derived by fitting the observed behavior of the linear polarization as a function of phase angle. The observed single scattering linear polarization phase curves of comets are fairly well represented by a mixture of magnesium-rich olivine particles and small carbonaceous particles. The input matrix of the code is thus given by the phase matrix for olivine as obtained in the laboratory plus a variable scattering fraction phase matrix for absorbing carbonaceous particles. These fractions are 3.5% for Comet Halley and 6% for Comet Hale-Bopp, the comet with the highest percentage of all those observed. The errors in the total input flux impinging on the nucleus surface caused by neglecting polarization are found to be within 10% for the full range of solar zenith angles. Additional tests on the resulting linear polarization of the light emerging from cometary nuclei in near-nucleus observation conditions at a variety of coma optical thicknesses show that the polarization phase curves do not experience any significant changes for optical thicknesses τ≳0.25 and Halley-like surface albedo, except near 90° phase angle.
Ali, Fawaz; Waller, Ed
2014-10-01
There are numerous scenarios where radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will result in the release of radioactive particulates that in turn can be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses that are delivered to bystanders; however, current state-of-the-art codes are unable to accurately calculate radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulation models characterize the resuspension of particulates in a complex urban geometry due to air-flow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the Computational Fluid Dynamics simulations, and Monte Carlo N-Particle eXtended is used to perform the Monte Carlo Radiation Transport simulations.
Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald
2013-01-01
Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for
Computation of electron diode characteristics by monte carlo method including effect of collisions.
NASA Technical Reports Server (NTRS)
Goldstein, C. M.
1964-01-01
Consistent field Monte Carlo method calculation for collision effect on electron-ion diode characteristics and for hard-sphere electron-neutral collision effect for monoenergetic thermionic emission
Quantum Monte Carlo Computations for Equations of State, Phase Transitions, and Elasticity of Silica
NASA Astrophysics Data System (ADS)
Cohen, R. E.; Militzer, B.; Wu, Z.; Driver, K.; Rios, P. L.; Towler, M.; Needs, R.
2007-12-01
We have performed Quantum Monte Carlo (QMC) computations for silica in the quartz, stishovite, and α-PbO2 structures as functions of compression as benchmark computations. QMC uses no approximate density functional, and the many-body, correlated, Schrödinger equation is effectively solved stochastically. In spite of the great success of DFT there are still some fundamental problems that need improvement. First is the need for increased accuracy for some rather ordinary materials such as silica. Although the local density approximation (LDA) gives excellent results for individual silica phases, such as the CaCl2 transition [1,2], it is not so good for comparing energetics of very different structures, such as quartz versus stishovite. LDA predicts stishovite to be the stable ground state structure rather than quartz. One of the first great successes of the Generalized Gradient Approximation (GGA) was to give the correct energy difference between quartz and stishovite [3]. Less well appreciated is the fact that almost all other properties, such as structure, equations of state, elastic constants, etc., are worse with the GGA than the LDA. Our QMC results will be used to improve density functionals, and show the way towards more accurate computations for Earth materials. Thermal contributions are included using density functional perturbation theory with the code ABINIT. We have also computed the shear elastic constant c11-c12 in stishovite, which is associated with the phase transition to the CaCl2 structure [1], with QMC. We find excellent agreement with experiments. We find that the main differences between QMC and DFT are crystalline phase dependent energy and pressure shifts. This work is supported by NSF grants EAR-0530282, EAR-0310139, and by DOE contract DE-FG02-99ER45795 to John Wilkins. Computations were performed on Blueice at NCAR under a BTS grant, Tungsten and Abe at NCSA, Franklin at NERSC within the friendly user program, and at the Carnegie
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2016-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE in the 2-dimensional momentum space. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
Reconstruction for proton computed tomography by tracing proton trajectories: a Monte Carlo study.
Li, Tianfang; Liang, Zhengrong; Singanallur, Jayalakshmi V; Satogata, Todd J; Williams, David C; Schulte, Reinhard W
2006-03-01
Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm^-1] to the curved CSP and MLP path estimates (5 lp cm^-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
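The ART referenced above is, at its core, the Kaczmarz projection step applied proton by proton: the image estimate is projected onto each ray's hyperplane a_i · x = b_i in turn. A minimal sketch follows, assuming each row of A holds per-voxel path lengths from whichever path estimate (SLP, CSP, or MLP) is in use; names and the relaxation parameter are illustrative.

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    """One Kaczmarz/ART sweep: for each ray i, project the current image
    estimate x onto the hyperplane a_i . x = b_i (b_i is the measured
    integral, e.g. water-equivalent path length, along ray i)."""
    for a_i, b_i in zip(A, b):
        denom = a_i @ a_i
        if denom > 0:
            x = x + relax * (b_i - a_i @ x) / denom * a_i
    return x
```

Repeated sweeps converge toward a solution of the (typically inconsistent, noisy) system; the abstract's observation is that better path estimates make the rows a_i more accurate, which speeds convergence.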
Kumar, Vaibhaw; Sridhar, Shyam; Errington, Jeffrey R
2011-11-14
We introduce Monte Carlo simulation methods for determining the wetting properties of model systems at geometrically rough interfaces. The techniques described here enable one to calculate the macroscopic contact angle of a droplet that organizes in one of the three wetting states commonly observed for fluids at geometrically rough surfaces: the Cassie, Wenzel, and impregnation states. We adopt an interface potential approach in which the wetting properties of a system are related to the surface density dependence of the surface excess free energy of a thin liquid film in contact with the substrate. We first describe challenges and inefficiencies encountered when implementing a direct version of this approach to compute the properties of fluids at rough surfaces. Next, we detail a series of convenient thermodynamic paths that enable one to obtain free energy information at relevant surface densities over a wide range of temperatures and substrate strengths in an efficient manner. We then show how this information is assembled to construct complete wetting diagrams at a temperature of interest. The strategy pursued within this work is general and is expected to be applicable to a wide range of molecular systems. To demonstrate the utility of the approach, we present results for a Lennard-Jones fluid in contact with a substrate containing rectangular-shaped grooves characterized by feature sizes of order ten fluid diameters. For this particular fluid-substrate combination, we find that the macroscopic theories of Cassie and Wenzel provide a reasonable description of simulation data. © 2011 American Institute of Physics
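The macroscopic theories of Cassie and Wenzel mentioned at the end reduce to two closed-form relations for the apparent contact angle: Wenzel's cos(theta*) = r cos(theta) and Cassie-Baxter's cos(theta*) = f (cos(theta) + 1) - 1, with r the roughness ratio and f the wetted solid fraction. A small sketch of both (function names are ours, not from the paper):

```python
import math

def wenzel_angle(theta_deg, roughness):
    """Apparent contact angle from the Wenzel relation cos(t*) = r*cos(t)."""
    c = roughness * math.cos(math.radians(theta_deg))
    c = max(-1.0, min(1.0, c))  # clamp: beyond +/-1 the surface is fully wet/dry
    return math.degrees(math.acos(c))

def cassie_angle(theta_deg, solid_fraction):
    """Apparent angle from Cassie-Baxter: cos(t*) = f*(cos(t) + 1) - 1."""
    c = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))
```

For a hydrophobic intrinsic angle (theta > 90°), roughness pushes the Wenzel angle higher, and trapping air (small f) pushes the Cassie angle higher still, which is the comparison the simulation data are tested against.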
Computed tomography with a low-intensity proton flux: results of a Monte Carlo simulation study
NASA Astrophysics Data System (ADS)
Schulte, Reinhard W.; Klock, Margio C. L.; Bashkirov, Vladimir; Evseev, Ivan G.; de Assis, Joaquim T.; Yevseyeva, Olga; Lopes, Ricardo T.; Li, Tianfang; Williams, David C.; Wroe, Andrew J.; Schelin, Hugo R.
2004-10-01
Conformal proton radiation therapy requires accurate prediction of the Bragg peak position. This problem may be solved by using protons rather than conventional x-rays to determine the relative electron density distribution via proton computed tomography (proton CT). However, proton CT has its own limitations, which need to be carefully studied before this technique can be introduced into routine clinical practice. In this work, we have used analytical relationships as well as the Monte Carlo simulation tool GEANT4 to study the principal resolution limits of proton CT. The GEANT4 simulations were validated by comparing them to predictions of the Bethe Bloch theory and Tschalar's theory of energy loss straggling, and were found to be in good agreement. The relationship between phantom thickness, initial energy, and the relative electron density uncertainty was systematically investigated to estimate the number of protons and dose needed to obtain a given density resolution. The predictions of this study were verified by simulating the performance of a hypothetical proton CT scanner when imaging a cylindrical water phantom with embedded density inhomogeneities. We show that a reasonable density resolution can be achieved with a relatively small number of protons, thus providing a possible dose advantage over x-ray CT.
Physics and computer architecture informed improvements to the Implicit Monte Carlo method
NASA Astrophysics Data System (ADS)
Long, Alex Roberts
The Implicit Monte Carlo (IMC) method has been a standard method for thermal radiative transfer for the past 40 years. In this time, the hydrodynamics methods that are coupled to IMC have evolved and improved, as have the supercomputers used to run large simulations with IMC. Several modern hydrodynamics methods use unstructured non-orthogonal meshes and high-order spatial discretizations. The IMC method has been used primarily with simple Cartesian meshes and always has a first order spatial discretization. Supercomputers are now made up of compute nodes that have a large number of cores. Current IMC parallel methods have significant problems with load imbalance. To utilize many core systems, algorithms must move beyond simple spatial decomposition parallel algorithms. To make IMC better suited for large scale multiphysics simulations in high energy density physics, new spatial discretizations and parallel strategies are needed. Several modifications are made to the IMC method to facilitate running on node-centered, unstructured tetrahedral meshes. These modifications produce results that converge to the expected solution under mesh refinement. A new finite element IMC method is also explored on these meshes, which offer a simulation runtime benefit but does not perform correctly in the diffusion limit. A parallel algorithm that utilizes on-node parallelism and respects memory hierarchies is studied. This method scales almost linearly when using physical cores on a node and benefits from multiple threads per core. A multi-compute node algorithm for domain decomposed IMC that passes mesh data instead of particles is explored as a means to solve load balance issues. This method scales better than the particle passing method on highly scattering problems with short time steps.
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel J.; Bolch, Wesley E.
2012-04-15
Purpose: To establish an organ dose database for pediatric and adolescent reference individuals undergoing computed tomography (CT) examinations by using Monte Carlo simulation. The data will permit rapid estimates of organ and effective doses for patients of different age, gender, examination type, and CT scanner model. Methods: The Monte Carlo simulation model of a Siemens Sensation 16 CT scanner previously published was employed as a base CT scanner model. A set of absorbed doses for 33 organs/tissues normalized to the product of 100 mAs and CTDIvol (mGy per 100 mAs·mGy) was established by coupling the CT scanner model with age-dependent reference pediatric hybrid phantoms. A series of single axial scans from the top of head to the feet of the phantoms was performed at a slice thickness of 10 mm, and at tube potentials of 80, 100, and 120 kVp. Using the established CTDIvol- and 100 mAs-normalized dose matrix, organ doses for different pediatric phantoms undergoing head, chest, abdomen-pelvis, and chest-abdomen-pelvis (CAP) scans with the Siemens Sensation 16 scanner were estimated and analyzed. The results were then compared with the values obtained from three independent published methods: CT-Expo software, organ dose for abdominal CT scan derived empirically from patient abdominal circumference, and effective dose per dose-length product (DLP). Results: Organ and effective doses were calculated and normalized to 100 mAs and CTDIvol for different CT examinations. At the same technical setting, dose to the organs, which were entirely included in the CT beam coverage, were higher by from 40 to 80% for newborn phantoms compared to those of 15-year phantoms. An increase of tube potential from 80 to 120 kVp resulted in 2.5-2.9-fold greater brain dose for head scans. The results from this study were compared with three different published studies and/or techniques. First, organ doses were compared to those given by CT-Expo which revealed dose
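A dose matrix of this kind is used multiplicatively: an organ dose coefficient normalized to 100 mAs and CTDIvol is scaled by the examination's actual tube current-time product and CTDIvol. A one-line sketch of that scaling (names and the exact normalization convention are assumptions based on the abstract, not the authors' code):

```python
def organ_dose(norm_coeff, mAs, ctdi_vol):
    """Organ dose in mGy from a coefficient normalized to 100 mAs and
    CTDIvol: dose = coeff * (mAs / 100) * CTDIvol.
    norm_coeff has units mGy per (100 mAs * mGy CTDIvol)."""
    return norm_coeff * (mAs / 100.0) * ctdi_vol
```

The appeal of this normalization is that one Monte Carlo coefficient table serves any technique factors, since mAs and CTDIvol both scale dose linearly.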
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid explanation of the observed crater population characteristics or of Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
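A toy version of such an equilibrium-resurfacing simulation can be sketched as follows. This is an illustration only, not the authors' code: the unit-square surface, square resurfacing patches, and the parameter values are all simplifying assumptions.

```python
import random

def simulate_resurfacing(n_steps, resurf_frac, craters_per_step, seed=1):
    """Toy equilibrium-resurfacing model: craters form at random positions
    on a unit-area surface; each step, a randomly placed square patch
    covering `resurf_frac` of the area erases the craters inside it."""
    rng = random.Random(seed)
    half = resurf_frac ** 0.5 / 2.0   # half the side length of the patch
    craters = []
    for _ in range(n_steps):
        craters += [(rng.random(), rng.random())
                    for _ in range(craters_per_step)]
        cx, cy = rng.random(), rng.random()
        craters = [(x, y) for (x, y) in craters
                   if not (abs(x - cx) < half and abs(y - cy) < half)]
    return len(craters)

# with small patches (case 1-like), the surviving crater count equilibrates
# near craters_per_step / resurf_frac rather than growing without bound
surviving = simulate_resurfacing(1000, 0.03, 5)
```

Comparing the simulated spatial distribution and age statistics of surviving craters against the observed population is what lets such a model be accepted or rejected.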
Wanek, Johann; Speller, Robert; Rühli, Frank Jakobus
2013-08-01
X-ray imaging is a nondestructive and preferred method in paleopathology to reconstruct the history of ancient diseases. Sophisticated imaging technologies such as computed tomography (CT) have become common for the investigation of skeletal disorders in human remains. Researchers have investigated the impact of ionizing radiation on living cells, but never on ancient cells in dry tissue. The effects of CT exposure on ancient cells have not been examined in the past and may be important for subsequent genetic analysis. To remedy this shortcoming, we developed different Monte Carlo models to simulate X-ray irradiation on ancient cells. Effects of mummification were considered by using two sizes of cells and three different phantom tissues, which enclosed the investigated cell cluster. This cluster was positioned at the isocenter of a CT scanner model, where the cell hit probabilities P(0,1,…, n) were calculated according to the Poisson distribution. To study the impact of the dominant physics process, CT scans for X-ray spectra of 80 and 120 kVp were simulated. Comparison between normal and dry tissue phantoms revealed that the probability of unaffected cells increased by 21 % following cell shrinkage for 80 kVp, while for 120 kVp, a further increase of unaffected cells of 23 % was observed. Consequently, cell shrinkage caused by dehydration decreased the impact of X-ray radiation on mummified cells significantly. Moreover, backscattered electrons in cortical bone protected deeper-lying ancient cells from radiation damage at 80 kVp X-rays.
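The Poisson cell-hit probabilities used in the study above are straightforward to compute; the mean-hit values in this sketch are illustrative, not taken from the paper.

```python
import math

def hit_probabilities(mean_hits, k_max):
    """P(k) = exp(-m) * m**k / k! for k = 0..k_max; P(0) is the
    probability that a cell receives no hits and is unaffected."""
    return [math.exp(-mean_hits) * mean_hits**k / math.factorial(k)
            for k in range(k_max + 1)]

# a shrunken (dehydrated) cell presents a smaller target, so its mean
# number of hits drops and the unaffected fraction P(0) rises
p0_normal = hit_probabilities(1.0, 0)[0]
p0_dry = hit_probabilities(0.7, 0)[0]
```

This is exactly why cell shrinkage from mummification increases the fraction of unaffected cells in the abstract's comparison.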
Monte-Carlo computation of turbulent premixed methane/air ignition
NASA Astrophysics Data System (ADS)
Carmen, Christina Lieselotte
The present work describes the results obtained by a time dependent numerical technique that simulates the early flame development of a spark-ignited premixed, lean, gaseous methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, provide excellent agreement with results obtained by other experimental and numerical researchers.
NASA Astrophysics Data System (ADS)
Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph
2016-03-01
A new 3D breast computed tomography (CT) system is under development enabling imaging of microcalcifications in a fully uncompressed breast including posterior chest wall tissue. The system setup uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. So, a major advantage of the presented system design is the combination of the fixed detectors and the fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics causing low count levels at the opposing detectors which would require a flat field correction, but it also shows a secondary to transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail which was still visible in the reconstructed image has a size of 0.2 mm.
Using Monte Carlo simulation to compute liquid-vapor saturation properties of ionic liquids.
Rane, Kaustubh S; Errington, Jeffrey R
2013-07-03
We discuss Monte Carlo (MC) simulation methods for calculating liquid-vapor saturation properties of ionic liquids. We first describe how various simulation tools, including reservoir grand canonical MC, growth-expanded ensemble MC, distance-biasing, and aggregation-volume-biasing, are used to address challenges commonly encountered in simulating realistic models of ionic liquids. We then indicate how these techniques are combined with histogram-based schemes for determining saturation properties. Both direct methods, which enable one to locate saturation points at a given temperature, and temperature expanded ensemble methods, which provide a means to trace saturation lines to low temperature, are discussed. We study the liquid-vapor phase behavior of the restricted primitive model (RPM) and a realistic model for 1,3-dimethylimidazolium tetrafluoroborate ([C1mim][BF4]). Results are presented to show the dependence of saturation properties of the RPM and [C1mim][BF4] on the size of the simulation box and the boundary condition used for the Ewald summation. For [C1mim][BF4] we also demonstrate the ability of our strategy to sample ion clusters that form in the vapor phase. Finally, we provide the liquid-vapor saturation properties of these models over a wide range of temperature. Overall, we observe that the choice of system size and boundary condition have a non-negligible effect on the calculated properties, especially at high temperature. Also, we find that the combination of grand canonical MC simulation and isothermal-isobaric temperature expanded ensemble MC simulation provides a computationally efficient means to calculate liquid-vapor saturation properties of ionic liquids.
Graf, Peter A.; Stewart, Gordon; Lackner, Matthew; Dykes, Katherine; Veers, Paul
2016-05-01
Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because it has theoretical convergence proportional to the inverse of the square root of the number of samples, which is independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches with calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for analysis of wind turbine fatigue as well as extreme loads.
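The dimension-independence argument above can be illustrated with a toy integrand standing in for the FAST fatigue-load response; the integrand, the six "metocean" dimensions, and the sample count are assumptions chosen purely for illustration.

```python
import random

def mc_mean(f, dim, n_samples, seed=0):
    """Monte Carlo estimate of E[f(x)] for x uniform on [0,1]**dim.
    The standard error falls as 1/sqrt(n_samples) regardless of dim,
    whereas a grid with m points per axis needs m**dim evaluations."""
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(dim)])
               for _ in range(n_samples)) / n_samples

# toy nonlinear response in 6 dimensions; the exact mean is 6 * (1/3) = 2
est = mc_mean(lambda x: sum(xi * xi for xi in x), dim=6, n_samples=20000)
```

A grid with just 10 points per axis would need 10**6 evaluations in this 6-dimensional example, while 20,000 random samples already give a tight estimate; this is the tractability gap the paper quantifies.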
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only-recently-published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
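For reference, the commonly applied "out-scatter" transport correction that the paper benchmarks against takes the standard textbook form below. This is a sketch of the conventional approximation with illustrative numbers, not the paper's new rigorous method.

```python
def outscatter_transport(sigma_t, sigma_s, mu_bar):
    """Out-scatter transport correction: sigma_tr = sigma_t - mu_bar * sigma_s,
    where mu_bar is the mean scattering cosine; the diffusion coefficient
    then follows as D = 1 / (3 * sigma_tr)."""
    sigma_tr = sigma_t - mu_bar * sigma_s
    return sigma_tr, 1.0 / (3.0 * sigma_tr)

# illustrative macroscopic cross sections (cm^-1), not benchmark data
sigma_tr, diff_coeff = outscatter_transport(1.0, 0.8, 0.5)
```

For strongly anisotropic scatterers such as hydrogen, this correction is known to be inaccurate, which is precisely the deficiency the rigorous method in the paper exposes.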
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Ono, T; Araki, F
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated by using patient CT images are useful to evaluate organ doses absorbed by individual patients.
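Evaluating a mean organ dose from a differential dose-volume histogram, as done in the study above, reduces to a weighted sum; the bin values below are made up for illustration.

```python
def mean_dose_from_dvh(bin_doses, volume_fractions):
    """Mean organ dose from a differential DVH: each dose bin weighted by
    the fraction of the organ volume receiving that dose."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(d * v for d, v in zip(bin_doses, volume_fractions))

# hypothetical organ: half the volume at 10 mGy, 30% at 20, 20% at 30
mean = mean_dose_from_dvh([10.0, 20.0, 30.0], [0.5, 0.3, 0.2])  # -> 17.0
```

Accumulating such histograms per organ contour over the MC dose grid is what converts a voxel-level dose distribution into the single per-organ numbers reported in the abstract.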
Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.
2013-01-15
Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT
Zygmanski, Piotr; Liu, Bo; Tsiamas, Panagiotis; Cifter, Fulya; Petersheim, Markus; Hesser, Jürgen; Sajo, Erno
2013-11-21
Recently, interactions of x-rays with gold nanoparticles (GNPs) and the resulting dose enhancement have been studied using several Monte Carlo (MC) codes (Jones et al 2010 Med. Phys. 37 3809-16, Lechtman et al 2011 Phys. Med. Biol. 56 4631-47, McMahon et al 2011 Sci. Rep. 1 1-9, Leung et al 2011 Med. Phys. 38 624-31). These MC simulations were carried out in simplified geometries and provided encouraging preliminary data in support of GNP radiotherapy. As these studies showed, radiation transport computations of clinical beams to obtain dose enhancement from nanoparticles has several challenges, mostly arising from the requirement of high spatial resolution and from the approximations used at the interface between the macroscopic clinical beam transport and the nanoscopic electron transport originating in the nanoparticle or its vicinity. We investigate the impact of MC simulation geometry on the energy deposition due to the presence of GNPs, including the effects of particle clustering and morphology. Dose enhancement due to a single and multiple GNPs using various simulation geometries is computed using GEANT4 MC radiation transport code. Various approximations in the geometry and in the phase space transition from macro- to micro-beams incident on GNPs are analyzed. Simulations using GEANT4 are compared to a deterministic code CEPXS/ONEDANT for microscopic (nm-µm) geometry. Dependence on the following microscopic (µ) geometry parameters is investigated: µ-source-to-GNP distance (µSAD), µ-beam size (µS), and GNP size (µC). Because a micro-beam represents clinical beam properties at the microscopic scale, the effect of using different types of micro-beams is also investigated. In particular, a micro-beam with the phase space of a clinical beam versus a plane-parallel beam with an equivalent photon spectrum is characterized. Furthermore, the spatial anisotropy of energy deposition around a nanoparticle is analyzed. Finally, dependence of dose enhancement
Henyey-Greenstein and Mie phase functions in Monte Carlo radiative transfer computations.
Toublanc, D
1996-06-20
Monte Carlo radiative transfer simulation of light scattering in planetary atmospheres is not a simple problem, especially the study of angular distribution of light intensity. Approximate phase functions such as Henyey-Greenstein, modified Henyey-Greenstein, or Legendre polynomial decomposition are often used to simulate the Mie phase function. An alternative solution using an exact calculation alleviates these approximations.
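Part of why the Henyey-Greenstein function is so widely used is that its cumulative distribution inverts in closed form, making angle sampling cheap, whereas the exact Mie phase function has no such closed form. A standard sampling sketch:

```python
import math, random

def sample_hg_costheta(g, rng=random.random):
    """Draw cos(theta) from the Henyey-Greenstein phase function with
    asymmetry parameter g, using the closed-form inverse CDF."""
    xi = rng()
    if abs(g) < 1e-6:                     # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

random.seed(42)
samples = [sample_hg_costheta(0.6) for _ in range(20000)]
# the mean of cos(theta) under HG equals g, here 0.6
```

Exact Mie sampling instead requires tabulating the phase function from Mie theory and inverting its CDF numerically, which is the kind of calculation the paper advocates in place of these approximations.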
Monte-Carlo scatter correction for cone-beam computed tomography with limited scan field-of-view
NASA Astrophysics Data System (ADS)
Bertram, Matthias; Sattel, Timo; Hohmann, Steffen; Wiegert, Jens
2008-03-01
In flat detector cone-beam computed tomography (CBCT), scattered radiation is a major source of image degradation, making accurate a posteriori scatter correction inevitable. A potential solution to this problem is provided by computerized scatter correction based on Monte-Carlo simulations. Using this technique, the detected distributions of X-ray scatter are estimated for various viewing directions using Monte-Carlo simulations of an intermediate reconstruction. However, as a major drawback, for standard CBCT geometries and with standard size flat detectors such as mounted on interventional C-arms, the scan field of view is too small to accommodate the human body without lateral truncations, and thus this technique cannot be readily applied. In this work, we present a novel method for constructing a model of the object in a laterally and possibly also axially extended field of view, which enables meaningful application of Monte-Carlo based scatter correction even in case of heavy truncations. Evaluation is based on simulations of a clinical CT data set of a human abdomen, which strongly exceeds the field of view of the simulated C-arm based CBCT imaging geometry. By using the proposed methodology, almost complete removal of scatter-caused inhomogeneities is demonstrated in reconstructed images.
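Once the Monte-Carlo scatter estimate for a viewing direction exists, the correction step itself is a per-pixel subtraction. The sketch below illustrates that step only; the clipping floor is an assumption added here to keep the subsequent log-transform for reconstruction well-defined, not a detail from the paper.

```python
def scatter_correct(measured, scatter_estimate, floor=1e-6):
    """Subtract the simulated scatter distribution from each measured
    projection value, clipping at `floor` to keep intensities positive."""
    return [max(m - s, floor) for m, s in zip(measured, scatter_estimate)]

# hypothetical detector readings and MC scatter estimates for one view
corrected = scatter_correct([1.00, 0.50, 0.30], [0.20, 0.10, 0.35])
```

The paper's contribution sits upstream of this step: building an object model that extends beyond the truncated field of view so the Monte-Carlo scatter estimate fed into the subtraction is meaningful.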
Clouvas, A; Xanthos, S; Takoudis, G; Potiriadis, C; Silva, J
2005-02-01
A very limited number of field experiments have been performed to assess the relative radiation detection sensitivities of commercially available equipment used to detect radioactive sources in recycled metal scrap. Such experiments require the cooperation and commitment of considerable resources on the part of vendors of the radiation detection systems and the cooperation of a steel mill or scrap processing facility. The results will unavoidably be specific to the equipment tested at the time, the characteristics of the scrap metal involved in the tests, and the specific configurations of the scrap containers. Given these limitations, the use of computer simulation for this purpose would be a desirable alternative. With this in mind, this study sought to determine whether Monte Carlo simulation of photon flux energy distributions resulting from a radiation source in metal scrap would be realistic. In the present work, experimental and simulated photon flux energy distributions in the outer part of a truck due to the presence of embedded radioactive sources in the scrap metal load are compared. The experimental photon fluxes are deduced by in situ gamma spectrometry measurements with a portable Ge detector and the calculated ones by Monte Carlo simulations with the MCNP code. The good agreement between simulated and measured photon flux energy distributions indicates that the results obtained by the Monte Carlo simulations are realistic.
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(alpha, beta) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently does interpolation on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
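One plausible reading of "linear-logarithmic interpolation in the square root of temperature" is that the logarithm of the cross section is interpolated linearly in sqrt(T) between the two bracketing library temperatures. A sketch under that assumption, with illustrative temperatures and cross-section values:

```python
import math

def interp_prob_table(T, T1, sig1, T2, sig2):
    """Interpolate log(sigma) linearly in sqrt(T) between values tabulated
    at two library temperatures T1 <= T <= T2 (a sketch, one reading of
    the 'linear-logarithmic in sqrt(T)' scheme)."""
    w = (math.sqrt(T) - math.sqrt(T1)) / (math.sqrt(T2) - math.sqrt(T1))
    return math.exp((1.0 - w) * math.log(sig1) + w * math.log(sig2))

# halfway in sqrt(T): 675 K lies midway between sqrt(300) and sqrt(1200),
# so the result is the geometric mean of the two tabulated values
sigma = interp_prob_table(675.0, 300.0, 4.0, 1200.0, 9.0)  # -> 6.0
```

The sqrt(T) abscissa reflects the physics of Doppler broadening, where thermal motion scales with the square root of temperature.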
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations.
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
2003-02-01
The present work shows how portable Ge detectors can be useful for measurements of the dose rate due to ionizing cosmic radiation. The methodology proposed converts the cosmic radiation induced background in a Ge crystal (energy range above 3 MeV) to the absorbed dose rate due to muons, which are responsible for 75% of the cosmic radiation dose rate at sea level. The key point is to observe in the high energy range (above 20 MeV) the broad muon peak resulting from the most probable energy loss of muons in the Ge detector. An energy shift of the muon peak was observed, as expected, for increasing dimensions of three Ge crystals (10%, 20%, and 70% efficiency). Taking into account the dimensions of the three detectors, the location of the three muon peaks was reproduced by Monte Carlo computations using the GEANT code. The absorbed dose rate due to muons has been measured in 50 indoor and outdoor locations at Thessaloniki, the second largest city in Greece, with a portable Ge detector and converted to the absorbed dose rate due to muons in an ICRU sphere representing the human body by using a factor derived from Monte Carlo computations. The outdoor and indoor mean muon dose rates were 25 nGy h(-1) and 17.8 nGy h(-1), respectively. The shielding factor for the 40 indoor measurements ranges from 0.5 to 0.9, with a most probable value between 0.7 and 0.8.
Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc
2006-08-15
The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on the CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data reaches typically into hundreds of millions of voxels. CT data, however, contain homogeneous regions which could be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose an octree-based compression algorithm: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates up to 75% are possible without losing the dosimetric accuracy of the simulation.
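A minimal sketch of the octree idea follows: cubic blocks whose density spread is within a tolerance are merged into one macro-voxel, otherwise the block is split into its eight octants and the test recurses. The density function and tolerance here are illustrative, not clinical data or the authors' implementation.

```python
def octree_compress(density, size, tol, origin=(0, 0, 0)):
    """Recursively merge cubic regions of a voxel volume whose density
    spread is <= tol; returns a list of (origin, edge, mean) macro-voxels.
    `density(x, y, z)` gives a voxel value; `size` must be a power of two."""
    x0, y0, z0 = origin
    vals = [density(x0 + i, y0 + j, z0 + k)
            for i in range(size) for j in range(size) for k in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return [(origin, size, sum(vals) / len(vals))]
    h = size // 2
    leaves = []
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                leaves += octree_compress(density, h, tol,
                                          (x0 + dx, y0 + dy, z0 + dz))
    return leaves

# two homogeneous halves: the 8**3 = 512 voxels collapse to 8 macro-voxels
density = lambda x, y, z: 1.0 if x >= 4 else 0.0
leaves = octree_compress(density, 8, 0.1)
```

Note how the subdivision concentrates exactly at the density gradient: homogeneous interiors collapse, while boundaries retain finer voxels, which is what preserves dosimetric accuracy.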
NASA Astrophysics Data System (ADS)
Tsirkunov, Yu. M.; Romanyuk, D. A.
2016-07-01
A dusty gas flow through two cascades of airfoils (blades), one moving and one immovable, is studied numerically. In the mathematical model of two-phase gas-particle flow, the carrier gas is treated as a continuum and it is described by the Navier-Stokes equations (pseudo-DNS (direct numerical simulation) approach) or the Reynolds averaged Navier-Stokes (RANS) equations (unsteady RANS approach) with the Menter k-ω shear stress transport (SST) turbulence model. The governing equations in both cases are solved by computational fluid dynamics (CFD) methods. The dispersed phase is treated as a discrete set of solid particles, the behavior of which is described by the generalized kinetic Boltzmann equation. The effects of gas-particle interaction, interparticle collisions, and particle scattering in particle-blade collisions are taken into account. The direct simulation Monte Carlo (DSMC) method is used for computational simulation of the dispersed phase flow. The effects of interparticle collisions and particle scattering are discussed.
Dickens, J.K.
1989-11-01
A "computer experiment" using Monte Carlo sampling methods has been designed to simulate the breakup of {sup 12}C by medium-energy neutrons into final reaction channels having 2, 3, or 4 outgoing charged particles. The calculational nuclear-physics concept used in the "experiment" is that of a sequentially decaying, highly excited compound nucleus. Two methods of Monte Carlo sampling, the rejection method and the cumulative-distribution method, are discussed as applied to the probability functions developed in the program.
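The two sampling methods named in the abstract can be illustrated on a simple target density p(x) = 2x on [0, 1] (a sketch, not the program described above; the target density is an arbitrary choice for demonstration):

```python
import random

random.seed(1)

def sample_inverse_cdf():
    # Cumulative-distribution method: the CDF is F(x) = x**2, so inverting
    # F(x) = u gives x = sqrt(u) for a uniform deviate u.
    return random.random() ** 0.5

def sample_rejection():
    # Rejection method: draw candidates uniformly under a flat envelope of
    # height p_max = 2 and accept each with probability p(x) / p_max.
    while True:
        x = random.random()
        if random.random() * 2.0 <= 2.0 * x:
            return x

n = 100_000
mean_inv = sum(sample_inverse_cdf() for _ in range(n)) / n
mean_rej = sum(sample_rejection() for _ in range(n)) / n
# Both estimators should approach the exact mean E[x] = 2/3.
print(mean_inv, mean_rej)
```

The cumulative-distribution method uses every random deviate but requires an invertible CDF; the rejection method needs only pointwise density evaluations at the cost of discarded candidates.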
NASA Astrophysics Data System (ADS)
Hoggan, Philip E.
2009-03-01
Slater-type orbitals (STO) are rarely used as atomic basis sets for molecular structure and property calculations: the integrals are expensive to evaluate, reliable basis sets are scarce, and exact properties such as Kato's cusp condition and the correct exponential decay of the electron density are not described significantly better numerically than with commonly used Gaussian basis sets. We pursue the systematic, parallelized development of integration routines for multi-centre integrals and of high-quality STO basis sets, useful for modern electron correlation calculations via compact, low-variance trial wave-functions for QMC (Quantum Monte Carlo). Molecular QMC applications are also rare because the method is comparatively complicated to use; however, it is extremely precise, can be made to include nearly all of the correlation energy, and scales well to large numbers of processors (thousands, at nearly 100 percent efficiency). Applications need to be carried out on a large scale to determine the electronic structure and properties of large (about 100 atoms) molecules of chemical interest, including intermolecular interactions, which are best described using Slater trial wave-functions for QMC. Such functions, combined as hydrogen-like atomic orbitals, possess the correct nodal structure for the high-precision FN-MC (Fixed Node Monte Carlo) methods, which recover more than 95 percent of the electron correlation energy.
TH-A-19A-10: Fast Four Dimensional Monte Carlo Dose Computations for Proton Therapy of Lung Cancer
Mirkovic, D; Titt, U; Mohan, R; Yepes, P
2014-06-15
Purpose: To develop and validate a fast and accurate four dimensional (4D) Monte Carlo (MC) dose computation system for proton therapy of lung cancer and other thoracic and abdominal malignancies in which the delivered dose distributions can be affected by respiratory motion of the patient. Methods: A 4D computed tomography (CT) scan for a lung cancer patient treated with protons in our clinic was used to create a time dependent patient model using our in-house, MCNPX-based Monte Carlo system (“MC{sup 2}”). The beam line configurations for two passively scattered proton beams used in the actual treatment were extracted from the clinical treatment plan and a set of input files was created automatically using MC{sup 2}. A full MC simulation of the beam line was computed using MCNPX and a set of phase space files for each beam was collected at the distal surface of the range compensator. The particles from these phase space files were transported through the 10 voxelized patient models corresponding to the 10 phases of the breathing cycle in the 4DCT, using MCNPX and an accelerated (fast) MC code called “FDC”, developed by us and based on the track repeating algorithm. The accuracy of the fast algorithm was assessed by comparing the two time dependent dose distributions. Results: An error of less than 1% in all voxels and all phases of the breathing cycle was achieved with this method, with a speedup of more than 1000 times. Conclusion: The proposed method, which uses full MC to simulate the beam line and the accelerated MC code FDC for the time consuming particle transport inside the complex, time dependent geometry of the patient, shows excellent accuracy together with extraordinary speed.
Tolpadi, A.K.; Hu, I.Z.; Correa, S.M.; Burrus, D.L.
1997-07-01
A coupled Lagrangian Monte Carlo Probability Density Function (PDF)-Eulerian Computational Fluid Dynamics (CFD) technique is presented for calculating steady three-dimensional turbulent reacting flow in a gas turbine combustor. PDF transport methods model turbulence-combustion interactions more accurately than conventional turbulence models with an assumed shape PDF. The PDF transport equation was solved using a Lagrangian particle tracking Monte Carlo (MC) method. The PDF modeled was over composition only. This MC module has been coupled with CONCERT, which is a fully elliptic three-dimensional body-fitted CFD code based on pressure correction techniques. In an earlier paper, this computational approach was described, but only fast chemistry calculations were presented in a typical aircraft engine combustor. In the present paper, reduced chemistry schemes were incorporated into the MC module that enabled the modeling of finite rate effects in gas turbine flames and therefore the prediction of CO and NO{sub x} emissions. With the inclusion of these finite rate effects, the gas temperatures obtained were also more realistic. Initially, a two scalar scheme was implemented that allowed validation against Raman data taken in a recirculation bluff body stabilized CO/H{sub 2}/N{sub 2}-air flame. Good agreement of the temperature and major species were obtained. Next, finite rate computations were performed in a single annular aircraft engine combustor by incorporating a simple three scalar reduced chemistry scheme for Jet A fuel. This three scalar scheme was an extension of the two scalar scheme for CO/H{sub 2}/N{sub 2} fuel. The solutions obtained using the present approach were compared with those obtained using the fast chemistry PDF transport approach as well as the presumed shape PDF method. The calculated exhaust gas temperature using the finite rate model showed the best agreement with measurements made by a thermocouple rake.
Method for Fast CT/SPECT-Based 3D Monte Carlo Absorbed Dose Computations in Internal Emitter Therapy
NASA Astrophysics Data System (ADS)
Wilderman, S. J.; Dewaraja, Y. K.
2007-02-01
The DPM (Dose Planning Method) Monte Carlo electron and photon transport program, designed for fast computation of radiation absorbed dose in external beam radiotherapy, has been adapted to the calculation of absorbed dose in patient-specific internal emitter therapy. Because both its photon and electron transport mechanics algorithms have been optimized for fast computation in 3D voxelized geometries (in particular, those derived from CT scans), DPM is perfectly suited for performing patient-specific absorbed dose calculations in internal emitter therapy. In the updated version of DPM developed for the current work, the necessary inputs are a patient CT image, a registered SPECT image, and any number of registered masks defining regions of interest. DPM has been benchmarked for internal emitter therapy applications by comparing computed absorbed fractions for a variety of organs in a Zubal phantom with reference results from the Medical Internal Radionuclide Dose (MIRD) Committee standards. In addition, the beta decay source algorithm and the photon tracking algorithm of DPM have been further benchmarked by comparison to experimental data. This paper presents a description of the program, the results of the benchmark studies, and some sample computations using patient data from radioimmunotherapy studies using {sup 131}I.
Zhang, Xiaofeng; Badea, Cristian; Hood, Greg; Wetzel, Arthur; Qi, Yi; Stiles, Joel; Johnson, G. Allan
2011-01-01
We present a method for high-resolution reconstruction of fluorescent images of the mouse thorax. It features an anatomically guided sampling method to retrospectively eliminate problematic data and a parallel Monte Carlo software package to compute the Jacobian matrix for the inverse problem. The proposed method was capable of resolving microliter-sized, femtomole-level quantum dot inclusions located close together in the middle of the mouse thorax. The reconstruction was verified against co-registered micro-CT data. Using the proposed method, the new system achieved significantly higher resolution and sensitivity than our previous system consisting of the same hardware. This method can be applied to any system utilizing similar imaging principles to improve imaging performance. PMID:21991539
Sechopoulos, Ioannis; Vedantham, Srinivasan; Suryanarayanan, Sankararaman; D’Orsi, Carl J.; Karellas, Andrew
2008-01-01
Purpose To prospectively determine the radiation dose absorbed by the organs and tissues of the body during a dedicated computed tomography of the breast (DBCT) study, using Monte Carlo methods and a phantom. Materials and Methods Using the Geant4 Monte Carlo toolkit, the Cristy anthropomorphic phantom and the geometry of a prototype DBCT system were simulated. The simulation was used to track x-rays emitted from the source until their complete absorption or exit from the simulation limits. The interactions of the x-rays with the 65 different volumes representing organs, bones and other tissues of the anthropomorphic phantom that resulted in energy deposition were recorded. These data were used to compute the radiation dose to the organs and tissues during a complete DBCT acquisition relative to the average glandular dose to the imaged breast (ROD, relative organ dose), using the x-ray spectra proposed for DBCT imaging. The effectiveness of a lead shield for reducing the dose to the organs was investigated. Results The maximum ROD among the organs was that of the ipsilateral lung, with a maximum of 3.25%, followed by the heart and the thymus. Of the skeletal tissues, the sternum received the highest dose, with a maximum ROD to the bone marrow of 2.24% and to the bone surface of 7.74%. The maximum ROD to the uterus, representative of that of an early-stage fetus, was 0.026%. These maxima occurred for the highest-energy x-ray spectrum (80 kVp) analyzed. A lead shield does not substantially protect the organs that receive the highest doses during DBCT. Discussion Although the doses to the organs from DBCT are substantially higher than those from planar mammography, they are comparable to or considerably lower than the doses from other radiographic procedures and much lower than those from other CT examinations. PMID:18292479
Advanced computational methods for nodal diffusion, Monte Carlo, and S{sub n} problems. Final Report
1994-12-31
The work addresses basic computational difficulties that arise in the numerical simulation of neutral particle radiation transport: discretized radiation transport problems, iterative methods, selection of parameters, and extension of current algorithms.
NASA Astrophysics Data System (ADS)
Schwarz, Ingmar; Fortini, Andrea; Wagner, Claudia Simone; Wittemann, Alexander; Schmidt, Matthias
2011-12-01
We consider a theoretical model for a binary mixture of colloidal particles and spherical emulsion droplets. The hard sphere colloids interact via additional short-ranged attraction and long-ranged repulsion. The droplet-colloid interaction is an attractive well at the droplet surface, which induces the Pickering effect. The droplet-droplet interaction is a hard-core interaction. The droplets shrink in time, which models the evaporation of the dispersed (oil) phase, and we use Monte Carlo simulations for the dynamics. In the experiments, polystyrene particles were assembled using toluene droplets as templates. The arrangement of the particles on the surface of the droplets was analyzed with cryogenic field emission scanning electron microscopy. Before evaporation of the oil, the particle distribution on the droplet surface was found to be disordered in experiments, and the simulations reproduce this effect. After complete evaporation, ordered colloidal clusters are formed that are stable against thermal fluctuations. Both in the simulations and with field emission scanning electron microscopy, we find stable packings that range from doublets, triplets, and tetrahedra to complex polyhedra of colloids. The simulated cluster structures and size distribution agree well with the experimental results. We also simulate hierarchical assembly in a mixture of tetrahedral clusters and droplets, and find supercluster structures with morphologies that are more complex than those of clusters of single particles.
NASA Astrophysics Data System (ADS)
Demidov, A.; Eschlböck-Fuchs, S.; Kazakov, A. Ya.; Gornushkin, I. B.; Kolmhofer, P. J.; Pedarnig, J. D.; Huber, N.; Heitz, J.; Schmid, T.; Rössler, R.; Panne, U.
2016-11-01
An improved Monte-Carlo (MC) method for standard-less analysis in laser induced breakdown spectroscopy (LIBS) is presented. Concentrations in MC LIBS are found by fitting model-generated synthetic spectra to experimental spectra. The current version of MC LIBS is based on graphics processing unit (GPU) computation and reduces the analysis time to several seconds per spectrum/sample; the previous version, based on central processing unit (CPU) computation, required unacceptably long analysis times of tens of minutes per spectrum/sample. The reduction of the computational time is achieved through massively parallel computing on the GPU, which embeds thousands of co-processors. It is shown that the number of iterations on the GPU exceeds that on the CPU by a factor > 1000 for the 5-dimensional parameter space and yet requires > 10-fold shorter computational time. The improved GPU-MC LIBS outperforms the CPU-MC LIBS in terms of accuracy, precision, and analysis time. The performance is tested on LIBS spectra obtained from pelletized powders of metal oxides consisting of CaO, Fe2O3, MgO, and TiO2 that simulate by-products of the steel industry, steel slags. It is demonstrated that GPU-based MC LIBS is capable of rapid multi-element analysis with a relative error between 1 and a few tens of percent, which is sufficient for industrial applications (e.g. steel slag analysis). The results of the improved GPU-based MC LIBS compare favorably with those of the CPU-based MC LIBS as well as with the results of the standard calibration-free (CF) LIBS based on the Boltzmann plot method.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in modeling and numerical accuracy considerations, is also dictated by the computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. Gas-turbine combustor flows are often characterized by a complex interaction between various physical processes: the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, and radiative heat transfer associated with highly absorbing and radiating species, among others. The rate-controlling processes often interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid-phase evaporation in many practical combustion devices.
Miller, John H.; Wilson, W E.; Lynch, D J.; Resat, Marianne S.; Trease, Harold E.
2001-10-15
Both in vitro and in vivo experiments show that cells that do not receive energy directly from the radiation field (bystanders) respond to radiation exposure. This effect is most easily demonstrated with radiation fields composed of particles with high linear energy transfer (LET) that traverse only a few cells before they are stopped. Even at a moderate fluence of high-LET radiation only a small fraction of cells in the irradiated population are hit; hence, many bystanders are present. Low-LET radiation tends to generate a homogeneous distribution of dose at the cellular level so that identifying bystanders is much more difficult than in experiments with the same fluence of high-LET radiation. Experiments are underway at several laboratories to characterize bystander responses induced by low-LET radiation. At the Pacific Northwest National Laboratory, experiments of this type are being carried out with an electron microbeam. A cell selected to receive energy directly from the irradiation source is placed over a hole in a mask that covers an electron gun. Monte Carlo simulations by Miller et al.(1) suggest that individual mammalian cells in a confluent monolayer could be targeted for irradiation by 25 to 100 keV electrons with minimal dose leakage to their neighbors. These calculations were based on a simple model of the cellular monolayer in which cells were assumed to be cylindrically symmetric with concentric cytoplasm and nucleus. Radial profiles, the lateral extent of cytoplasm and nucleus as a function of depth into a cell, were obtained from confocal microscopy of HeLa-cell monolayers.
ERIC Educational Resources Information Center
Kalkanis, G.; Sarris, M. M.
1999-01-01
Describes an educational software program for the study of and detection methods for the cosmic ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
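As a minimal illustration of the random-sampling and tally ideas listed above (not taken from the lecture notes; the slab problem and cross section are invented for the example), a one-dimensional transport tally can be checked against its analytic answer:

```python
import math
import random

random.seed(42)

def transmission(sigma_t, thickness, histories):
    """Tally the fraction of particles crossing a purely absorbing slab.

    Each history samples a flight distance from the exponential free-path
    distribution, s = -ln(u) / sigma_t, and scores if s exceeds the slab
    thickness. The analytic answer is exp(-sigma_t * thickness).
    """
    tally = 0
    for _ in range(histories):
        s = -math.log(random.random()) / sigma_t
        if s > thickness:
            tally += 1
    return tally / histories

est = transmission(sigma_t=1.0, thickness=2.0, histories=200_000)
print(est, "vs analytic", math.exp(-2.0))
```

The statistical uncertainty of such a binomial tally shrinks as one over the square root of the number of histories, which is the course's motivation for variance-reduction and parallel-execution techniques.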
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
NASA Astrophysics Data System (ADS)
Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel
2017-03-01
The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.
Monte Carlo Computation of the Finite-Size Scaling Function: an Alternative Approach
NASA Astrophysics Data System (ADS)
Kim, Jae-Kwon; de Souza, Adauto J. F.; Landau, D. P.
1996-03-01
We show how to compute numerically a finite-size-scaling function which is particularly effective in extracting accurate infinite-volume-limit values (bulk values) of certain physical quantities^1. We illustrate our procedure for the two- and three-dimensional Ising models, and report our bulk values for the correlation length, magnetic susceptibility, and renormalized four-point coupling constant. Based on these bulk values we extract the values of various critical parameters. ^1 J.-K. Kim, Euro. Phys. Lett. 28, 211 (1994). Research supported in part by the NSF. Permanent address: Departamento de Fisica e Matematica, Universidade Federal Rural de Pernambuco, 52171-900, Recife, Pernambuco, Brazil.
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance-sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
Wang, Z; Gao, M
2014-06-01
Purpose: Monte Carlo simulation plays an important role in the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm{sup 2}, 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
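The two price points quoted above (one on-demand master plus spot workers) are consistent with a simple linear cost model; the per-node rates below are back-of-envelope values implied by the abstract, not figures stated in it:

```python
# Model: cost(n) = c_od + (n - 1) * c_spot, with cost(40) = $0.63/h and
# the projected cost(100) = $1.41/h. Solving for the implied per-node rates:
c_spot = (1.41 - 0.63) / (99 - 39)   # $/h per spot worker (~$0.013)
c_od = 0.63 - 39 * c_spot            # $/h for the on-demand master (~$0.123)

def cluster_cost(n_nodes):
    """Projected hourly cost of an n-node cluster under this linear model."""
    return c_od + (n_nodes - 1) * c_spot

print(round(c_spot, 3), round(c_od, 3), round(cluster_cost(100), 2))
```

The fact that the implied spot rate is roughly a tenth of the on-demand rate is what makes the spot-instance worker pool the dominant cost saving in this setup.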
NASA Astrophysics Data System (ADS)
Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc
2004-09-01
High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both strong test-time reduction and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is a growing complexity in calculating average lifetimes and failure rates (FITs) using ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be 10{sup 6} hours under typical conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged during 10000 hours, mixing different temperature and drive-current conditions, leading to acceleration factors above 300-400. These conditions are high-cost and time-consuming and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from physical parameters of experimental degradation laws. In this paper, Distributed Feedback single-mode laser diodes (DFB-LD) used for 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and
Reconfigurable computing for Monte Carlo simulations: Results and prospects of the Janus project
NASA Astrophysics Data System (ADS)
Baity-Jesi, M.; Baños, R. A.; Cruz, A.; Fernandez, L. A.; Gil-Narvion, J. M.; Gordillo-Guerrero, A.; Guidetti, M.; Iñiguez, D.; Maiorano, A.; Mantovani, F.; Marinari, E.; Martin-Mayor, V.; Monforte-Garcia, J.; Muñoz Sudupe, A.; Navarro, D.; Parisi, G.; Pivanti, M.; Perez-Gaviro, S.; Ricci-Tersenghi, F.; Ruiz-Lorenzo, J. J.; Schifano, S. F.; Seoane, B.; Tarancon, A.; Tellez, P.; Tripiccione, R.; Yllanes, D.
2012-08-01
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding about the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
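The binary-variable formulation mentioned above can be sketched with a small Edwards-Anderson spin glass in which every Metropolis acceptance probability is precomputed, so the inner update loop needs only integer arithmetic plus a table lookup (a toy illustration, not the Janus firmware; lattice size, temperature, and sweep count are arbitrary choices):

```python
import math
import random

random.seed(7)
L, T = 16, 2.0  # lattice size and temperature (demo values)

# Random +/-1 couplings on horizontal and vertical bonds, periodic
# boundaries, and a random initial spin configuration.
Jh = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
Jv = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

# The local field is an integer in -4..4, so all needed acceptance
# probabilities exp(-2d/T) fit in a tiny precomputed table.
accept = {d: math.exp(-2.0 * d / T) for d in range(1, 5)}

def sweep():
    for i in range(L):
        for j in range(L):
            h = (Jh[i][j] * s[i][(j + 1) % L] + Jh[i][j - 1] * s[i][j - 1]
                 + Jv[i][j] * s[(i + 1) % L][j] + Jv[i - 1][j] * s[i - 1][j])
            d = s[i][j] * h  # flipping s[i][j] changes the energy by 2*d
            if d <= 0 or random.random() < accept[d]:
                s[i][j] = -s[i][j]

for _ in range(100):
    sweep()

# Energy per spin, E/N = -(1/N) * sum over bonds of J*s*s (negative when
# more bonds are satisfied than frustrated).
e = -sum(s[i][j] * (Jh[i][j] * s[i][(j + 1) % L] + Jv[i][j] * s[(i + 1) % L][j])
         for i in range(L) for j in range(L)) / L**2
print("energy per spin:", e)
```

On an FPGA the same structure maps naturally to hardware: the spins are single bits, the local field is a small adder tree, and many such updates run in parallel per clock cycle.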
Approximate Bayesian Computation using Markov Chain Monte Carlo simulation: DREAM(ABC)
NASA Astrophysics Data System (ADS)
Sadegh, Mojtaba; Vrugt, Jasper A.
2014-08-01
The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh (2013) to introduce "likelihood-free" inference as vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for a residual-based likelihood function in favor of one or multiple different summary statistics that exhibit superior diagnostic power. Here we propose several methodological improvements over commonly used ABC sampling methods to permit inference of complex system models. Our methodology entitled DREAM(ABC) uses the DiffeRential Evolution Adaptive Metropolis algorithm as its main building block and takes advantage of a continuous fitness function to efficiently explore the behavioral model space. Three case studies demonstrate that DREAM(ABC) is at least an order of magnitude more efficient than commonly used ABC sampling methods for more complex models. DREAM(ABC) is also more amenable to distributed, multi-processor, implementation, a prerequisite to diagnostic inference of CPU-intensive system models.
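The baseline that DREAM(ABC) improves upon, accepting a prior draw whenever its simulated summary statistic falls within a tolerance of the observed one, can be sketched as plain rejection-ABC (an illustrative toy, not the DREAM(ABC) algorithm; the model, prior, summary statistic, and tolerance are invented):

```python
import random
import statistics

random.seed(3)

# "Observed" data from a hidden model: Normal(mu = 4, sigma = 1).
obs = [random.gauss(4.0, 1.0) for _ in range(100)]
s_obs = statistics.mean(obs)

# ABC rejection sampling: draw a parameter from the prior, simulate a data
# set, and keep the draw whenever the simulated summary statistic lands
# within eps of the observed one -- no likelihood function is evaluated.
posterior, eps = [], 0.2
while len(posterior) < 300:
    mu = random.uniform(0.0, 10.0)                # flat prior draw
    sim = [random.gauss(mu, 1.0) for _ in range(100)]
    if abs(statistics.mean(sim) - s_obs) < eps:   # distance on summaries
        posterior.append(mu)

print("ABC posterior mean:", statistics.mean(posterior))
```

The low acceptance rate of this blind prior sampling is exactly the inefficiency the abstract targets: DREAM(ABC) instead adapts its proposals toward the behavioral region of parameter space.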
Baird, Stuart J E; Santos, Filipe
2010-09-01
Approximate Bayesian computation (ABC) substitutes simulation for analytic models in Bayesian inference. Simulating evolutionary scenarios under Kimura's stepping stone model (KSS) might therefore allow inference over spatial genetic processes where analytical results are difficult to obtain. ABC first creates a reference set of simulations and would proceed by comparing summary statistics over KSS simulations to summary statistics from localities sampled in the field, but a comparison of which localities and which stepping stones? Identical stepping stones can be arranged so that two localities fall in the same stepping stone, are nearest or diagonal neighbours, or are without contact. None of these choices is intrinsically correct, yet some choice must be made, and it affects inference. We explore a Bayesian strategy for mapping field observations onto discrete stepping stones. We make available Sundial, a tool for projecting field data onto the plane. We generalize KSS over regular tilings of the plane. We show that Bayesian averaging over the mapping between a continuous field area and discrete stepping stones improves the fit between KSS and isolation-by-distance expectations. We make available Tiler Durden for carrying out this Bayesian averaging. We describe a novel parameterization of KSS based on Wright's neighbourhood size, placing an upper bound on the geographic area represented by a stepping stone, and make it available as m Vector. We generalize spatial coalescence recursions to continuous- and discrete-space cases and use these to numerically solve for KSS coalescence, previously examined only using simulation. We thus provide applied and analytical resources for comparison of stepping stone simulations with field observations.
Driver, K P; Cohen, R E; Wu, Zhigang; Militzer, B; Ríos, P López; Towler, M D; Needs, R J; Wilkins, J W
2010-05-25
Silica (SiO2) is an abundant component of the Earth whose crystalline polymorphs play key roles in its structure and dynamics. First-principles density functional theory (DFT) methods have often been used to accurately predict properties of silicates, but fundamental failures occur. Such failures occur even in silica, the simplest silicate, and understanding pure silica is a prerequisite to understanding the rocky part of the Earth. Here, we study silica with quantum Monte Carlo (QMC), which until now was not computationally possible for such complex materials, and find that QMC overcomes the failures of DFT. QMC is a benchmark method that does not rely on density functionals but rather explicitly treats the electrons and their interactions via a stochastic solution of Schrödinger's equation. Using ground-state QMC plus phonons within the quasiharmonic approximation of density functional perturbation theory, we obtain the thermal pressure and equations of state of silica phases up to Earth's core-mantle boundary. Our results provide the best constrained equations of state and phase boundaries available for silica. QMC indicates a transition to the dense α-PbO2 structure above the core-insulating D″ layer, but the absence of a seismic signature suggests the transition does not contribute significantly to global seismic discontinuities in the lower mantle. However, the transition could still provide seismic signals from deeply subducted oceanic crust. We also find an accurate shear elastic constant for stishovite and its geophysically important softening with pressure.
Platten, David John
2014-06-01
Existing data used to calculate the barrier transmission of scattered radiation from computed tomography (CT) are based on primary beam CT energy spectra. This study uses the EGSnrc Monte Carlo system and Epp user code to determine the energy spectra of CT scatter from four different primary CT beams passing through an ICRP 110 male reference phantom. Each scatter spectrum was used as a broad-beam x-ray source in transmission simulations through seventeen thicknesses of lead (0.00-3.50 mm). A fit of transmission data to lead thickness was performed to obtain α, β and γ parameters for each spectrum. The mean energies of the scatter spectra were up to 12.3 keV lower than that of the corresponding primary spectrum. For 120 kVp scatter beams the transmission through lead was at least 50% less than predicted by existing data for thicknesses of 1.5 mm and greater; at least 30% less transmission was seen for 140 kVp scatter beams. This work has shown that the mean energy and half-value layer of CT scatter spectra are lower than those of the corresponding primary beam. The transmission of CT scatter radiation through lead is lower than that calculated with currently available data. Using the data from this work will result in less lead shielding being required for CT scanner installations.
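The α, β, γ fit mentioned above is presumably the standard three-parameter broad-beam transmission model (often attributed to Archer et al.). A minimal sketch of that model follows; the parameter values are purely illustrative assumptions, not the fitted values from this study.

```python
import math

def transmission(x_mm, alpha, beta, gamma):
    """Archer-style three-parameter broad-beam transmission through x_mm
    of shielding: B(x) = [(1 + b/a) * exp(a*g*x) - b/a]**(-1/g).
    At x = 0 this reduces exactly to 1 (no attenuation)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x_mm) - r) ** (-1.0 / gamma)

# Purely illustrative parameters for a lead barrier (NOT the paper's fit):
alpha, beta, gamma = 2.2, 5.7, 0.55
for x in (0.0, 0.5, 1.0, 2.0, 3.5):
    print(f"{x:4.1f} mm Pb -> T = {transmission(x, alpha, beta, gamma):.2e}")
```

Fitting measured transmission data to this functional form yields the α, β, γ triple that shielding calculations then reuse for arbitrary barrier thicknesses.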
SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine
Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L
2014-06-01
Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files and submit these to a computing cluster. MC2 converts the results into DICOM format, so any treatment planning system can import the data for comparison with conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles, and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two of the three double-scattering beam lines, validation results have already been published; spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation against simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread-out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were then scaled to the results of dose-per-MU determinations based on published methods. The simulations of doses in the magnetically scanned beam line were also validated against measured longitudinal and lateral profiles, and the source parameters were fine-tuned to achieve maximum agreement with the measured data. The dosimetric calibration was performed by scoring energy deposition per proton and scaling the results to a standard dose measurement of a 10 x 10 x 10 cm3 volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
NASA Astrophysics Data System (ADS)
Sampson, Andrew Joseph
This dissertation describes the application of two principled variance-reduction strategies to increase efficiency in two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied for two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference, ΔD, between highly correlated dose calculations in homogeneous and heterogeneous geometries. Particle transport in the heterogeneous geometry assumed a purely homogeneous environment, with altered particle weights accounting for the bias. Average gains of 37 and 60 are reported from using CMC, relative to un-correlated Monte Carlo (UMC) calculations, for the prostate and breast CTVs, respectively. To further increase the efficiency up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC computes ΔD using CMC on a low-resolution (LR) spatial grid, followed by interpolation to a high-resolution (HR) voxel grid. The interpolated HR ΔD is then summed with an HR, pre-computed, homogeneous dose map. ICMC computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 s and 0.39 s, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution to the adjoint Boltzmann transport equation computed either via the discrete ordinates method (DOM) or a MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested for a
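The correlated-sampling idea behind CMC, scoring two similar configurations with the same random histories so that their difference has far lower variance than the difference of two independent runs, can be illustrated on a toy pair of responses. The exponential "responses" below are stand-ins, not the dissertation's transport model.

```python
import math
import random
import statistics

def f(x):                       # toy "heterogeneous" response
    return math.exp(-1.05 * x)

def g(x):                       # toy "homogeneous" response
    return math.exp(-x)

random.seed(2)
n = 20_000
us = [random.random() for _ in range(n)]
vs = [random.random() for _ in range(n)]

# Correlated: both responses scored on the SAME random histories.
d_corr = [f(u) - g(u) for u in us]
# Uncorrelated: each response scored on independent histories.
d_unc = [f(u) - g(v) for u, v in zip(us, vs)]

print(statistics.stdev(d_corr), statistics.stdev(d_unc))
```

Because the two responses nearly cancel sample-by-sample, the correlated difference estimator has a dramatically smaller standard deviation, which is the source of the efficiency gains the abstract reports.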
Son, Kihong; Cho, Seungryong; Kim, Jin Sung; Han, Youngyih; Ju, Sang Gyu; Choi, Doo Ho
2014-03-06
Image-guided techniques for radiation therapy have improved the precision of radiation delivery by sparing normal tissues. Cone-beam computed tomography (CBCT) has emerged as a key technique for patient positioning and target localization in radiotherapy. Here, we investigated the imaging radiation dose delivered to radiosensitive organs of a patient during a CBCT scan. The 4D extended cardiac-torso (XCAT) phantom and the Geant4 Application for Tomographic Emission (GATE) Monte Carlo (MC) simulation tool were used for the study. A computed tomography dose index (CTDI) standard polymethyl methacrylate (PMMA) phantom was used to validate the MC-based dosimetric evaluation. We implemented an MC model of a clinical on-board imager integrated with the Trilogy accelerator. The MC model's accuracy was validated by comparing its weighted CTDI (CTDIw) values with those of previous studies, which revealed good agreement. We calculated the absorbed doses of various human organs at different treatment sites such as the head-and-neck, chest, abdomen, and pelvis regions, in both standard CBCT scan mode (125 kVp, 80 mA, and 25 ms) and low-dose scan mode (125 kVp, 40 mA, and 10 ms). In the former mode, the average absorbed doses of the organs in the head-and-neck and chest regions ranged from 4.09 to 8.28 cGy, whereas those of the organs in the abdomen and pelvis regions ranged from 4.30 to 7.48 cGy. In the latter mode, the absorbed doses of the organs in the head-and-neck and chest regions ranged from 1.61 to 1.89 cGy, whereas those of the organs in the abdomen and pelvis regions ranged from 0.79 to 1.85 cGy. The radiation dose in the low-dose mode was about 20% of that in the standard mode, which is in good agreement with previous reports. We opine that the findings of this study would significantly facilitate decisions regarding the administration of extra imaging doses to radiosensitive organs.
Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2015-01-01
Objective: To derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVLs) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs) in a Hologic Selenia Dimensions system using a direct radiography mode. Results: Calculated HVL values showed good agreement with those obtained experimentally; the greatest relative difference between the Monte Carlo calculated and experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553
NASA Technical Reports Server (NTRS)
Horton, B. E.; Bowhill, S. A.
1971-01-01
This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Groh, Kim De; Kneubel, Christian A.
2014-01-01
A space experiment flown as part of the Materials International Space Station Experiment 6B (MISSE 6B) was designed to compare the atomic oxygen erosion yield (Ey) of layers of Kapton H polyimide with no spacers between layers with that of layers of Kapton H with spacers between layers. The results were compared to a solid Kapton H (DuPont, Wilmington, DE) sample. Monte Carlo computational modeling was performed to optimize atomic oxygen interaction parameter values to match the results of both the MISSE 6B multilayer experiment and the undercut erosion profile from a crack defect in an aluminized Kapton H sample flown on the Long Duration Exposure Facility (LDEF). The Monte Carlo modeling produced credible agreement with space results of increased Ey for all samples with spacers as well as predicting the space-observed enhancement in erosion near the edges of samples due to scattering from the beveled edges of the sample holders.
Chow, James C. L.; Leung, Michael K. K.; Islam, Mohammad K.; Norrlinger, Bernhard D.; Jaffray, David A.
2008-01-15
The aim of this study is to evaluate the impact on patient dose of kilovoltage cone beam computed tomography (kV-CBCT) in prostate intensity-modulated radiation therapy (IMRT). The dose distributions for five prostate IMRT plans were calculated using the Pinnacle3 treatment planning system. To calculate the patient dose from CBCT, phase-space beams of a CBCT head based on the ELEKTA x-ray volume imaging system were generated using the Monte Carlo BEAMnrc code for 100, 120, 130, and 140 kVp energies. An in-house graphical user interface called DOSCTP (DOSXYZnrc-based), developed using MATLAB, was used to calculate the dose distributions due to a 360 deg. photon arc from the CBCT beam with the same patient CT image sets as used in Pinnacle3. The two calculated dose distributions were added together by setting the CBCT doses equal to 1%, 1.5%, 2%, and 2.5% of the prescription dose of the prostate IMRT. The prostate plan and the summed dose distributions were then processed in the CERR platform to determine the dose-volume histograms (DVHs) of the regions of interest. Moreover, dose profiles along the x- and y-axes crossing the isocenter, with and without addition of the CBCT dose, were determined. It was found that the added doses due to CBCT are most significant at the femur heads. Higher doses were found in the bones for a relatively low-energy CBCT beam such as 100 kVp. Apart from the bones, the CBCT dose was observed to be most concentrated on the anterior and posterior sides of the patient anatomy. Analysis of the DVHs for the prostate and other critical tissues showed that they vary only slightly with the added CBCT dose at different beam energies. On the other hand, the changes in the DVHs for the femur heads due to the CBCT dose and beam energy were more significant than those of the rectal and bladder walls. By analyzing the vertical and horizontal dose profiles crossing the femur heads and isocenter, with and without the CBCT dose equal to 2% of the
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Capogni, M; Lo Meo, S; Fazio, A
2010-01-01
Two CERN Monte Carlo codes, i.e. GEANT3.21 and GEANT4, were compared. The specific routine (sch2for), implemented in GEANT3.21 to simulate a disintegration process, and the G4RadioactiveDecay class, provided by GEANT4, were used for the computation of the full-energy-peak and total efficiencies of several radionuclides. No reference to experimental data was involved. A level of agreement better than 1% for the total efficiency and a deviation lower than 3.5% for the full-energy-peak efficiencies were found. Copyright 2009 Elsevier Ltd. All rights reserved.
Shedlock, Daniel; Haghighat, Alireza
2005-01-01
In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1998. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields, result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF
Quantum Monte Carlo Computations of the (Mg1-XFeX) SiO3 Perovskite to Post-perovskite Phase Boundary
NASA Astrophysics Data System (ADS)
Lin, Yangzheng; Cohen, R. E.; Floris, Andrea; Shulenburger, Luke; Driver, Kevin P.
We have computed total energies of FeSiO3 and MgSiO3 perovskite and post-perovskite using diffusion Monte Carlo with the qmcpack GPU code. In conjunction with DFT+U computations for intermediate compositions (Mg1-XFeX)SiO3 and phonons computed using density functional perturbation theory (DFPT) with the pwscf code, we have derived the chemical potentials of perovskite (Pv) and post-perovskite (PPv) (Mg1-XFeX)SiO3 and computed the binary phase diagram versus P, T, and X using a non-ideal solid solution model. Finite-temperature effects were treated within the quasi-harmonic approximation (QHA). Our results show that ferrous iron stabilizes PPv and lowers the Pv-PPv transition pressure, which is consistent with previous theoretical and some experimental studies. We will discuss the correlation between the Earth's D″ layer and the Pv to PPv phase boundary. Computations were performed on XSEDE machines and on the Oak Ridge Leadership Computing Facility (OLCF) machine Titan under project CPH103geo of the INCITE program. This work is supported by NSF.
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
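The article's Quick BASIC program is not reproduced here, but the classroom method it describes can be sketched in a few lines of Python: estimate pi by counting the fraction of random points in the unit square that fall inside the quarter-circle (a hypothetical re-implementation, not the article's listing).

```python
import random

def mc_pi(n):
    """Hit-or-miss Monte Carlo: the fraction of random points falling
    inside the unit quarter-circle estimates pi/4."""
    hits = sum(
        1 for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n

random.seed(0)
est = mc_pi(100_000)
print(est)
```

With 100,000 samples the estimate typically lands within about 0.01 of pi; the error shrinks as 1/sqrt(n), which is the key lesson of the classroom exercise.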
Vasquez, Victor R; Whiting, Wallace B
2005-12-01
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. It is a common assumption in models of this type (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
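The distinction drawn above between random and systematic errors can be sketched with a toy Monte Carlo propagation: a systematic bias is drawn once per simulated experiment, while random noise is drawn once per measurement, so averaging over measurements suppresses only the latter. The model and numeric values are illustrative assumptions, not those of the paper's case studies.

```python
import random
import statistics

def replicate_mean(m, sigma_rand, sigma_sys):
    """One simulated experiment: a single bias drawn once (systematic error),
    plus independent per-measurement noise (random error), around a true
    value of 10.0; returns the mean of m measurements."""
    bias = random.gauss(0.0, sigma_sys)
    return statistics.fmean(
        10.0 + bias + random.gauss(0.0, sigma_rand) for _ in range(m)
    )

random.seed(3)
only_random = [replicate_mean(25, 0.2, 0.0) for _ in range(2000)]
with_sys    = [replicate_mean(25, 0.2, 0.5) for _ in range(2000)]

# Averaging shrinks the random part (~sigma_rand/sqrt(m)) but not the bias.
print(statistics.stdev(only_random), statistics.stdev(with_sys))
```

Comparing the two output distributions shows the systematic component dominating the spread, the kind of stochastic dominance the abstract's method is designed to detect.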
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
Sima, Octavian; Lépy, Marie-Christine
2016-03-01
The uncertainty of quantities relevant in gamma-ray spectrometry (efficiency, transfer factor, and the self-attenuation (FA) and coincidence-summing (FC) correction factors) is realistically evaluated by Monte Carlo propagation of the distributions characterizing the parameters on which these quantities depend. Probability density functions are constructed and summarized as recommended in GUM Supplement 1 and compared with the values obtained using the traditional approach (the GUM uncertainty framework). Special cases in which this approach encounters difficulties (FC uncertainty due to the uncertainty of decay-scheme parameters; the effect of activity and matrix inhomogeneity on efficiency) are also discussed.
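The GUM Supplement 1 procedure referred to above (draw the input quantities from their assigned PDFs, push each draw through the measurement model, and summarize the output distribution by its mean and a percentile-based coverage interval) can be sketched as follows. The toy activity model and all numeric values are illustrative assumptions, not the paper's.

```python
import random
import statistics

def model(n_counts, eff, f_c):
    """Toy measurement model: activity A = N / (efficiency * correction)."""
    return n_counts / (eff * f_c)

random.seed(4)
draws = sorted(
    model(random.gauss(10000, 100),      # counted events
          random.gauss(0.050, 0.002),    # detection efficiency
          random.gauss(0.95, 0.01))      # coincidence-summing correction
    for _ in range(100_000)
)
mean = statistics.fmean(draws)
lo, hi = draws[2500], draws[97500]       # 95% coverage interval, GUM-S1 style
print(f"A = {mean:.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")
```

Unlike the first-order GUM uncertainty framework, the percentile interval remains valid when the output distribution is skewed by the nonlinear ratio model.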
NASA Astrophysics Data System (ADS)
Brown, David F. R.; Gibbs, Mark N.; Clary, David C.
1996-11-01
We describe a new method to calculate the vibrational ground state properties of weakly bound molecular systems and apply it to (HF)2 and HF-HCl. A Bayesian Inference neural network is used to fit an analytic function to a set of ab initio data points, which may then be employed by the quantum diffusion Monte Carlo method to produce ground state vibrational wave functions and properties. The method is general and relatively simple to implement and will be attractive for calculations on systems for which no analytic potential energy surface exists.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.
Vieira, L; Vaz, T F; Costa, D C; Almeida, P
2014-01-01
To describe and validate the simulation of the basic features of GE Millennium MG gamma camera using the GATE Monte Carlo platform. Crystal size and thickness, parallel-hole collimation and a realistic energy acquisition window were simulated in the GATE platform. GATE results were compared to experimental data in the following imaging conditions: a point source of (99m)Tc at different positions during static imaging and tomographic acquisitions using two different energy windows. The accuracy between the events expected and detected by simulation was obtained with the Mann-Whitney-Wilcoxon test. Comparisons were made regarding the measurement of sensitivity and spatial resolution, static and tomographic. Simulated and experimental spatial resolutions for tomographic data were compared with the Kruskal-Wallis test to assess simulation accuracy for this parameter. There was good agreement between simulated and experimental data. The number of decays expected when compared with the number of decays registered, showed small deviation (≤ 0.007%). The sensitivity comparisons between static acquisitions for different distances from source to collimator (1, 5, 10, 20, 30 cm) with energy windows of 126-154 keV and 130-158 keV showed differences of 4.4%, 5.5%, 4.2%, 5.5%, 4.5% and 5.4%, 6.3%, 6.3%, 5.8%, 5.3%, respectively. For the tomographic acquisitions, the mean differences were 7.5% and 9.8% for the energy window 126-154 keV and 130-158 keV. Comparison of simulated and experimental spatial resolutions for tomographic data showed no statistically significant differences with 95% confidence interval. Adequate simulation of the system basic features using GATE Monte Carlo simulation platform was achieved and validated. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as
NASA Astrophysics Data System (ADS)
Okazaki, Susumu; Touhara, Hidekazu; Nakanishi, Koichiro
1984-07-01
A Monte Carlo calculation has been carried out for a 5 mol% aqueous solution of methanol at 298.15 K and experimental density, with the Metropolis scheme in the NVT ensemble. The total number of molecules is 216, of which 11 are methanol. The three kinds of pair potential functions used are all based on SCF MO calculations: water-water interactions with the MCY (Matsuoka-Clementi-Yoshimine) potential, and water-methanol and methanol-methanol interactions with those proposed by Okazaki et al. and Jorgensen. In total, 5 800 000 configurations have been generated, and the final 4 200 000 configurations have been used for the calculation of thermodynamic quantities and various distribution functions. It is found that the mixing is slightly exothermic, which can be ascribed to the promotion of water structure around methanol rather than to the formation of hydrogen bonding between water and methanol. Evidence is given for the existence of a hydrophobic interaction effect and the self-association of methanol with or without one water layer in between.
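The Metropolis scheme used for this kind of canonical-ensemble sampling accepts a trial move with probability min(1, exp(-ΔE/kT)). A minimal one-dimensional sketch follows, using a harmonic toy potential rather than the study's SCF-based pair potentials.

```python
import math
import random
import statistics

def metropolis_1d(n_steps, kT=1.0, step=1.0):
    """Metropolis sampling of a single coordinate in E(x) = x^2/2.
    Trial displacements are accepted with probability min(1, exp(-dE/kT));
    for this potential the stationary variance of x equals kT."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        trial = x + random.uniform(-step, step)
        dE = 0.5 * trial * trial - 0.5 * x * x
        if dE <= 0.0 or random.random() < math.exp(-dE / kT):
            x = trial
        samples.append(x)      # rejected moves re-count the current state
    return samples

random.seed(5)
xs = metropolis_1d(200_000)
print(statistics.pvariance(xs))    # should approach kT = 1.0
```

The same accept/reject rule, applied to rigid-body moves of 216 molecules under the MCY-type potentials, generates the configurations averaged over in the study.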
Baum, Karl G; Helguera, María
2007-11-01
SimSET is a package for simulation of emission tomography data sets. Condor is a popular distributed computing environment. Simple C/C++ applications and shell scripts are presented which allow the execution of SimSET on the Condor environment. This is accomplished without any modification to SimSET by executing multiple instances and using its combinebin utility. This enables research facilities without dedicated parallel computing systems to utilize the idle cycles of desktop workstations to greatly reduce the run times of their SimSET simulations. The necessary steps to implement this approach in other environments are presented along with sample results.
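The split-and-merge pattern the authors describe (running multiple SimSET instances on Condor and merging the outputs with combinebin) amounts to partitioning the photon-history budget across independent jobs. A minimal planning sketch, with hypothetical job names and seed handling:

```python
def plan_condor_jobs(total_events, n_jobs):
    """Split one large SimSET-style run into n_jobs independent jobs whose
    event counts sum to the requested total; the outputs would then be
    merged with SimSET's combinebin utility (job naming is hypothetical)."""
    base, extra = divmod(total_events, n_jobs)
    plans = []
    for i in range(n_jobs):
        events = base + (1 if i < extra else 0)
        # Each entry maps to one line of a Condor submit file; a distinct
        # seed per job keeps the parallel histories statistically independent.
        plans.append({"job": f"simset_{i:03d}", "events": events, "seed": i + 1})
    return plans
```

Because the instances share no state, the speedup is limited only by the number of idle workstations and the final merge step.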
Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone only a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and should therefore be validated using a variety of phantoms with different shapes and material compositions that produce a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions in which fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing ranged from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous, but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y-, and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x-y-z TCM, and z-axis-only TCM to obtain
NASA Astrophysics Data System (ADS)
Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.
2017-06-01
In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm together with a Monte Carlo simulation code. The evolutionary algorithm searches the geometrical parameters of a detector model by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries: a beaker of small diameter laid over the detector window, and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective optimization, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, such as Marinelli beakers, Petri dishes, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a detector model that has been successfully verified for different source-detector geometries and materials, and experimentally validated using CRMs.
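The search loop described above can be illustrated with a deliberately simplified sketch: a two-parameter toy detector model with an analytic "efficiency" stands in for the Monte Carlo transport, and the two objectives are the mismatches against the two reference FEPE sets (all numbers and parameter names are hypothetical):

```python
import random

random.seed(2)

# Toy forward model standing in for the Monte Carlo efficiency calculation:
# the detector is reduced to two hypothetical parameters (crystal radius r,
# mount-cap thickness t); the real method runs a full MC transport code.
def fepe(params, geometry):
    r, t = params
    if geometry == "small":          # small beaker on the detector window
        return 0.3 * r - 0.05 * t
    return 0.2 * r - 0.15 * t        # large beaker wrapping the crystal

REF = {"small": 0.25, "large": 0.10}  # hypothetical reference FEPE values

def objectives(params):
    # One objective per reference geometry, as in the multi-objective scheme.
    return tuple(abs(fepe(params, g) - REF[g]) for g in ("small", "large"))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop_size=30, gens=200, sigma=0.05):
    pop = [(random.uniform(0.5, 1.5), random.uniform(0.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        parent = random.choice(pop)
        child = tuple(max(0.0, p + random.gauss(0.0, sigma)) for p in parent)
        pool = pop + [child]
        scores = [objectives(p) for p in pool]
        # Keep the non-dominated front first, ranked by total mismatch.
        front = [p for p, s in zip(pool, scores)
                 if not any(dominates(o, s) for o in scores)]
        front.sort(key=lambda p: sum(objectives(p)))
        pop = (front + pool)[:pop_size]
    return min(pop, key=lambda p: sum(objectives(p)))
```

In the real methodology each fitness evaluation is a Monte Carlo run, which is why the algorithm is parallelized across evaluations.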
Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0
Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.
1996-10-01
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from free-molecular to continuum in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they collide with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed, externally generated field or is determined using a local charge-neutrality assumption. Ion chemistry is modelled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be either externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package Tecplot is used for graphics display. The majority of the software packages are written in standard Fortran.
Gliksman, N R; Skibbens, R V; Salmon, E D
1993-01-01
Microtubules (MTs) in newt mitotic spindles grow faster than MTs in the interphase cytoplasmic microtubule complex (CMTC), yet spindle MTs do not have the long lengths or lifetimes of the CMTC microtubules. Because MTs undergo dynamic instability, it is likely that changes in the durations of growth or shortening are responsible for this anomaly. We have used a Monte Carlo computer simulation to examine how changes in the number of MTs and changes in the catastrophe and rescue frequencies of dynamic instability may be responsible for the cell cycle dependent changes in MT characteristics. We used the computer simulations to model interphase-like or mitotic-like MT populations on the basis of the dynamic instability parameters available from newt lung epithelial cells in vivo. We started with parameters that produced MT populations similar to the interphase newt lung cell CMTC. In the simulation, increasing the number of MTs and either increasing the frequency of catastrophe or decreasing the frequency of rescue reproduced the changes in MT dynamics measured in vivo between interphase and mitosis. PMID: 8298190
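A dynamic-instability Monte Carlo of this kind reduces to a two-state model per microtubule: grow until a stochastic catastrophe, shrink until a stochastic rescue. The sketch below uses illustrative velocities and frequencies, not the measured newt-cell parameters:

```python
import random

random.seed(3)

def simulate_mts(n_mt, k_cat, k_res, v_grow=7.0, v_short=17.0,
                 dt=1.0, t_end=600.0):
    """Two-state dynamic-instability Monte Carlo. Velocities are in um/min,
    frequencies in events/s, time step dt in s; all values are illustrative
    assumptions. Returns the mean MT length (um) at t_end."""
    lengths = [0.0] * n_mt
    growing = [True] * n_mt
    for _ in range(int(t_end / dt)):
        for i in range(n_mt):
            if growing[i]:
                lengths[i] += v_grow / 60.0 * dt
                if random.random() < k_cat * dt:   # catastrophe event
                    growing[i] = False
            else:
                lengths[i] = max(0.0, lengths[i] - v_short / 60.0 * dt)
                if lengths[i] == 0.0 or random.random() < k_res * dt:
                    growing[i] = True              # rescue or regrowth
    return sum(lengths) / n_mt

# Raising the catastrophe frequency (interphase-like to mitotic-like)
# shortens the steady-state MT population, as in the paper's simulations.
interphase_like = simulate_mts(100, k_cat=0.005, k_res=0.05)
mitotic_like = simulate_mts(100, k_cat=0.05, k_res=0.05)
```

With the low catastrophe frequency the population is in the unbounded-growth regime (long, persistent MTs); raising it moves the drift negative and lengths stay short, reproducing the qualitative interphase-to-mitosis change.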
NASA Astrophysics Data System (ADS)
Meyer, Sebastian; Gianoli, Chiara; Magallanes, Lorena; Kopp, Benedikt; Tessonnier, Thomas; Landry, Guillaume; Dedes, George; Voss, Bernd; Parodi, Katia
2017-02-01
Ion beam therapy offers the possibility of a highly conformal tumor-dose distribution; however, this technique is extremely sensitive to inaccuracies in the treatment procedures. Ambiguities in the conversion of Hounsfield units of the treatment-planning x-ray CT to relative stopping power (RSP) can cause uncertainties in the estimated ion range of up to several millimeters. Ion CT (iCT) represents a favorable solution, allowing direct assessment of the RSP. In this simulation study we investigate the performance of the integration-mode configuration for carbon iCT, in comparison with a single-particle approach under the same set-up. The experimental detector consists of a stack of 61 air-filled parallel-plate ionization chambers, interleaved with 3 mm thick PMMA absorbers. By means of Monte Carlo simulations, this design was applied to acquire iCTs of phantoms of tissue-equivalent materials. The acquisition parameters were optimized to reduce the dose exposure, and the implications of a reduced absorber thickness were assessed. In order to overcome limitations of integration-mode detection in the presence of lateral tissue heterogeneities, a dedicated post-processing method using a linear decomposition of the detector signal was developed and its performance was compared to list-mode acquisition. For the current set-up, the phantom dose could be reduced to below 30 mGy with only minor image-quality degradation. Using the decomposition method, a correct identification of the components and an RSP accuracy improvement of around 2.0% were obtained. The comparison of integration- and list-mode acquisition indicated slightly better image quality for the latter, with average median RSP errors below 1.8% and 1.0%, respectively. With a decreased absorber thickness a reduced RSP error was observed. Overall, these findings support the potential of iCT for low-dose RSP estimation, showing that integration-mode detectors with dedicated post-processing strategies
NASA Astrophysics Data System (ADS)
Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.
2013-04-01
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness, and translation per rotation were also incorporated into the model to make the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying 'ISOURCE = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator, for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to accurately simulate spiral CT scanning in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall to within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the
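The core of such a source model, table movement expressed as an isocenter shift that advances with gantry angle, can be sketched independently of DOSXYZnrc; the parameter names and units below are generic assumptions, not the Mortran variables of the actual implementation:

```python
def spiral_isocenters(z_start, scan_length, collimation, pitch,
                      views_per_rotation=360, start_angle=0.0, direction=1):
    """Generate (gantry_angle_deg, isocenter_z) pairs for a spiral scan.
    Table feed per rotation = pitch * collimation, mimicking how a spiral
    phase-space source shifts the isocenter as a function of beam angle.
    All lengths share one unit (e.g. cm); direction = +1 or -1."""
    feed = pitch * collimation                   # table travel per 360 deg
    n_views = int(scan_length / feed * views_per_rotation)
    pairs = []
    for k in range(n_views):
        angle = (start_angle + direction * 360.0 * k / views_per_rotation) % 360.0
        z = z_start + feed * k / views_per_rotation   # continuous table motion
        pairs.append((angle, z))
    return pairs
```

Because each view carries its own isocenter coordinate, a whole spiral acquisition can be driven in a single simulation run instead of one run per table position.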
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel, and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for the acceptance-rejection process with a single loop over all particles; meanwhile, the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
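The majorant-kernel acceptance-rejection idea can be shown on a minimal serial example: a sum kernel K(u, v) = u + v (chosen purely for illustration) is bounded by 2·v_max, so candidate pairs can be drawn cheaply and thinned, while total particle mass is conserved exactly. This is only the skeleton of the idea, not the differentially weighted GPU scheme of the paper:

```python
import random

random.seed(5)

def majorant(vmax):
    # Upper bound for the illustrative sum kernel K(u, v) = u + v:
    # K(u, v) <= 2 * vmax for every pair in the population.
    return 2.0 * vmax

def coagulate(volumes, t_end):
    """Acceptance-rejection PBMC coagulation with a majorant kernel:
    candidate pairs are drawn at the (cheap) majorant rate and accepted
    with probability K_true / K_majorant, avoiding a double loop."""
    t = 0.0
    while t < t_end and len(volumes) > 1:
        n = len(volumes)
        k_maj = majorant(max(volumes))
        rate = 0.5 * n * (n - 1) * k_maj        # total majorant event rate
        t += random.expovariate(rate)           # waiting time to next candidate
        i, j = random.sample(range(n), 2)
        k_true = volumes[i] + volumes[j]
        if random.random() < k_true / k_maj:    # thinning (acceptance) step
            volumes[i] += volumes[j]            # merge the pair
            volumes.pop(j)
    return volumes

pop = coagulate([1.0] * 200, t_end=0.001)
```

Mass conservation is exact because an accepted event only moves volume between particles; the paper's contribution is making this single-loop step differentially weighted and mapping cells to GPU threads.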
Burns, T.J.
1994-03-01
An X Window application capable of importing geometric information directly from two Computer-Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes, can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
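The lecture's running example, estimating π, is the canonical illustration of both theorems it cites:

```python
import random

random.seed(6)

def estimate_pi(n):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that fall inside the quarter circle."""
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

# Law of Large Numbers: the estimate converges to pi as n grows.
# Central Limit Theorem: the statistical error shrinks like 1/sqrt(n).
```

Doubling the accuracy therefore costs four times the samples, which is exactly why parallel computing (the summer school's theme) pairs so naturally with Monte Carlo: the samples are independent and trivially distributed.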
Papadimitroulas, P; Kagadis, GC; Loudos, G
2014-06-15
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations incorporating pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the IT'IS Foundation. The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE Materials Database. Several radiopharmaceuticals used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas, and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept below 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest dose absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
NASA Astrophysics Data System (ADS)
Chugunov, Svyatoslav; Li, Changying
2015-09-01
Parallel implementations of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C. Parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high-performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability, and the speedup was proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring a communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. Typical speedups achieved for parallel MCML (up to 326x) demonstrated a linear increase with problem size. The precision of MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially resolved results. The presented parallel versions of the MCML and IAD programs are portable on multiple computing platforms. The parallel programs could significantly speed up simulations for scientists and be utilized to their full potential in computing systems that are readily available without additional costs.
NASA Astrophysics Data System (ADS)
Bednarz, Bryan; Hancox, Cindy; Xu, X. George
2009-09-01
There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU
NASA Astrophysics Data System (ADS)
Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael
2013-05-01
In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
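A compartment model of this kind reduces to coupled transfer equations in which activity only moves between compartments, so the total is conserved between time points, exactly the property the abstract emphasizes. A minimal sketch follows; the plasma-kidney-bladder chain and the rate constants are illustrative assumptions, not the fitted MAG3 parameters:

```python
def renal_kinetics(k_pk=0.04, k_kb=0.02, dt=1.0, t_end=1800.0):
    """Minimal plasma -> kidneys -> bladder compartment chain (rate
    constants per second are illustrative). Explicit-Euler update; every
    transfer is subtracted from one compartment and added to another,
    so total activity is conserved at each time point."""
    plasma, kidneys, bladder = 1.0, 0.0, 0.0   # normalized injected activity
    curves = []
    for step in range(int(t_end / dt)):
        uptake = k_pk * plasma * dt            # renal uptake from plasma
        excretion = k_kb * kidneys * dt        # kidney -> bladder transit
        plasma -= uptake
        kidneys += uptake - excretion
        bladder += excretion
        curves.append((step * dt, plasma, kidneys, bladder))
    return curves
```

Each compartment's column of `curves` is the time-activity curve that would be assigned to the corresponding XCAT structure; changing `k_pk` or `k_kb` mimics altered relative uptake or transit time, as described above.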
Guo, Xin; Minakata, Daisuke; Crittenden, John
2014-09-16
We have developed a computer-based first-principles kinetic Monte Carlo (CF-KMC) model to predict degradation mechanisms and the fates of intermediates and byproducts produced from the degradation of polyethylene glycol (PEG) by UV light in the presence of hydrogen peroxide (UV/H2O2). The CF-KMC model is composed of a reaction pathway generator, a reaction rate constant estimator, and a KMC solver. The KMC solver is able to solve the predicted pathways without solving ordinary differential equations. The predicted time-dependent profiles of the averaged molecular weight and the polydispersity index (i.e., the ratio of the weight-averaged molecular weight to the number-averaged molecular weight) for PEG degradation were validated against, and are consistent with, experimental observations. The model provides detailed and quantitative insights into the time evolution of the molecular weight distribution and the concentration profiles of low-molecular-weight products and functional groups. Our approach may be useful for predicting the fates of degradation products for a wide range of complicated organic contaminants.
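A KMC solver of the kind described, advancing the system event by event instead of integrating ODEs, is commonly implemented as a Gillespie-style loop. The sketch below uses a toy A → B → C chain as a hypothetical stand-in for a PEG scission pathway; species names, rates, and counts are all illustrative:

```python
import random

random.seed(8)

def kmc(counts, reactions, t_end):
    """Gillespie-style kinetic Monte Carlo: `reactions` is a list of
    (rate_constant, reactant_indices, product_indices). Stochastic event
    sampling replaces ODE integration of the pathway network."""
    t = 0.0
    while t < t_end:
        # Mass-action propensity of each reaction channel.
        props = []
        for k, reactants, _ in reactions:
            a = k
            for s in reactants:
                a *= counts[s]
            props.append(a)
        total = sum(props)
        if total == 0.0:
            break                        # nothing left to react
        t += random.expovariate(total)   # time to the next event
        r = random.uniform(0.0, total)   # pick a channel by propensity
        for (k, reactants, products), a in zip(reactions, props):
            if r < a:
                for s in reactants:
                    counts[s] -= 1
                for s in products:
                    counts[s] += 1
                break
            r -= a
    return counts

# Toy chain A -> B -> C (hypothetical stand-in for PEG fragmentation steps).
final = kmc([1000, 0, 0], [(0.1, [0], [1]), (0.05, [1], [2])], t_end=100.0)
```

Averaging many such stochastic trajectories yields the time-dependent molecular weight distribution without ever forming the ODE system, which is the solver property highlighted above.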
Watson, P; Mainegra-Hing, E; Soisson, E; Naqa, I El; Seuntjens, J
2012-07-01
A fast and accurate MC-based scatter correction algorithm was implemented on real cone-beam computed tomography (CBCT) data. An ACR CT accreditation phantom was imaged on a Varian OBI CBCT scanner using the standard-dose head protocol (100 kVp, 151 mAs, partial-angle). A fast Monte Carlo simulation developed in the EGSnrc framework was used to transport photons through the uncorrected CBCT scan. From the simulation output, the contributions from both primary and scattered photons to each projection image were estimated. Using these estimates, a subtractive scatter correction was performed on the CBCT projection data. Implementation of the scatter correction algorithm on real CBCT data was shown to help mitigate scatter-induced artifacts, such as cupping and streaking. The scatter-corrected images were also shown to have improved accuracy in the reconstructed attenuation coefficient values. In three regions of interest centered on material inserts in the ACR phantom, the reconstructed CT numbers agreed with clinical CT scan data to within 35 Hounsfield units after scatter correction. These results suggest that the proposed scatter correction algorithm is successful in improving image quality in real CBCT images. The accuracy of the attenuation coefficients extracted from the corrected CBCT scan renders the data suitable for adaptive, on-the-fly dose calculations on individual fractions, as well as for vastly improved image registration. © 2012 American Association of Physicists in Medicine.
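One simple form of such a subtractive correction removes, from each measured pixel, the scatter fraction estimated by the simulation; the per-pixel scaling below is an illustrative choice, not necessarily the authors' exact scheme:

```python
def scatter_correct(projection, primary_est, scatter_est):
    """Subtractive scatter correction for one projection image. The MC
    simulation supplies per-pixel primary and scatter estimates; the
    measured signal is reduced by the estimated scatter fraction
    s / (p + s). All inputs are flat, same-length pixel lists."""
    corrected = []
    for meas, p, s in zip(projection, primary_est, scatter_est):
        frac = s / (p + s) if (p + s) > 0 else 0.0
        corrected.append(max(0.0, meas * (1.0 - frac)))
    return corrected
```

Scaling by the measured signal, rather than subtracting the raw simulated scatter, makes the correction robust to an overall normalization mismatch between the MC simulation and the scanner.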
Agasthya, G A; Harrawood, B C; Shah, J P; Kapadia, A J
2012-01-07
Neutron stimulated emission computed tomography (NSECT) is being developed as a non-invasive imaging modality to detect and quantify iron overload in the human liver. NSECT uses gamma photons emitted by the inelastic interaction between monochromatic fast neutrons and iron nuclei in the body to detect and quantify the disease. Previous simulated and physical experiments with phantoms have shown that NSECT has the potential to accurately diagnose iron overload with reasonable levels of radiation dose. In this work, we describe the results of a simulation study conducted to determine the sensitivity of the NSECT system for hepatic iron quantification in patients of different sizes. A GEANT4 simulation of the NSECT system was developed with a human liver and two torso sizes corresponding to small and large patients. The iron concentration in the liver ranged between 0.5 and 20 mg g(-1), corresponding to clinically reported iron levels in iron-overloaded patients. High-purity germanium gamma detectors were simulated to detect the emitted gamma spectra, which were background corrected using suitable water phantoms and analyzed to determine the minimum detectable level (MDL) of iron and the sensitivity of the NSECT system. These analyses indicate that for a small patient (torso major axis = 30 cm) the MDL is 0.5 mg g(-1) and sensitivity is ∼13 ± 2 Fe counts/mg/mSv and for a large patient (torso major axis = 40 cm) the values are 1 mg g(-1) and ∼5 ± 1 Fe counts/mg/mSv, respectively. The results demonstrate that the MDL for both patient sizes lies within the clinically significant range for human iron overload.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic
NASA Astrophysics Data System (ADS)
Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-08-01
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT
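The look-up-table evaluation such software performs can be sketched roughly as below; the table values, position indexing, and the helical correction factor are all invented for illustration:

```python
import numpy as np

# Hypothetical look-up table: organ dose per 100 mAs (mGy) for volumetric
# acquisitions at discrete couch positions along the phantom.
lut = {
    "liver":   np.array([0.02, 0.15, 0.60, 0.15, 0.02]),
    "thyroid": np.array([0.50, 0.20, 0.05, 0.01, 0.00]),
}

def organ_dose(organ, positions, mAs, helical_factor=1.0):
    """Sum LUT contributions over the irradiated couch positions, scale by
    tube load, and apply a correction factor for helical acquisitions."""
    per_100mAs = lut[organ][positions].sum()
    return per_100mAs * (mAs / 100.0) * helical_factor

# Abdomen-range volumetric scan covering positions 1..3 at 200 mAs:
dose = organ_dose("liver", [1, 2, 3], mAs=200)
```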
Li, Ruochen; Englehardt, James D; Li, Xiaoguang
2012-02-01
Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
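The two-stage idea, deterministic ascent to the posterior mode followed by MCMC started there instead of a burn-in period, can be sketched on a toy one-dimensional posterior (a standard normal stands in for the real dose-response model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Toy log-posterior: standard normal.
    return -0.5 * theta**2

def grad_log_post(theta):
    return -theta

# Stage 1: gradient ascent to the posterior mode estimate (PME),
# replacing the conventional burn-in.
theta = 5.0
for _ in range(200):
    theta += 0.1 * grad_log_post(theta)
pme = theta

# Stage 2: Metropolis MCMC initialized at the PME.
samples = []
current = pme
for _ in range(2000):
    prop = current + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(current):
        current = prop
    samples.append(current)
```

Because the chain starts at the mode, early samples are already drawn from the high-probability region, which is the data-efficiency gain the abstract reports.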
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
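A minimal sketch of the sequential ABC idea, repeated filtering of a particle population under shrinking tolerances, on a toy decay-rate inference problem (the real application uses the full glucose-metabolism ODE model, and a proper ABC-SMC also weights particles; both are omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: infer the rate k of y' = -k*y from noise-free observations.
t = np.linspace(0, 2, 10)
k_true = 1.5
data = np.exp(-k_true * t)

def distance(k):
    # Discrepancy between simulated and observed trajectories.
    return np.abs(np.exp(-k * t) - data).sum()

# Sequential filtering with shrinking tolerances (the SMC idea):
particles = rng.uniform(0.0, 5.0, size=2000)  # draws from the prior
for eps in (2.0, 0.5, 0.1):
    accepted = particles[[distance(k) < eps for k in particles]]
    # Resample survivors and perturb to form the next population.
    particles = rng.choice(accepted, size=2000) + rng.normal(scale=0.05, size=2000)

posterior_mean = particles.mean()
```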
Gifford, Kent A. . E-mail: kagifford@mail.mdanderson.org; Horton, John L.; Pelloski, Christopher E.; Jhingran, Anuja; Court, Laurence E.; Eifel, Patricia J.
2005-10-01
Purpose: To determine the effects of Fletcher Suit Delclos ovoid shielding on dose to the bladder and rectum during intracavitary radiotherapy for cervical cancer. Methods and Materials: The Monte Carlo method was used to calculate the dose in 12 patients receiving low-dose-rate intracavitary radiotherapy with both shielded and unshielded ovoids. Cumulative dose-difference surface histograms were computed for the bladder and rectum. Doses to the 2-cm³ and 5-cm³ volumes of highest dose were computed for the bladder and rectum with and without shielding. Results: Shielding affected dose to the 2-cm³ and 5-cm³ volumes of highest dose for the rectum (10.1% and 11.1% differences, respectively). Shielding did not have a major impact on the dose to the 2-cm³ and 5-cm³ volumes of highest dose for the bladder. The average dose reduction to 5% of the surface area of the bladder was 53 cGy. Reductions as large as 150 cGy were observed to 5% of the surface area of the bladder. The average dose reduction to 5% of the surface area of the rectum was 195 cGy. Reductions as large as 405 cGy were observed to 5% of the surface area of the rectum. Conclusions: Our data suggest that the ovoid shields can greatly reduce the radiation dose delivered to the rectum. We did not find the same degree of effect on the dose to the bladder. To calculate the dose accurately, however, the ovoid shields must be included in the dose model.
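The D(2 cm³)/D(5 cm³)-style metric used above, the minimum dose received by the hottest fixed volume of an organ, reduces to a sort over voxel doses; the toy dose arrays and the 10% shielding effect below are invented for illustration:

```python
import numpy as np

def dose_to_hottest_volume(dose, voxel_cc, volume_cc):
    """Minimum dose (cGy) received by the hottest `volume_cc` of an organ,
    i.e. a D_2cc / D_5cc style metric."""
    n = max(1, int(round(volume_cc / voxel_cc)))
    return np.sort(dose.ravel())[::-1][:n].min()

# Toy rectum dose arrays (cGy), 0.5 cc voxels, with and without shields.
rng = np.random.default_rng(2)
unshielded = rng.uniform(300, 700, size=100)
shielded = unshielded * 0.9  # shields reduce rectal dose in this toy model

d2cc_unsh = dose_to_hottest_volume(unshielded, 0.5, 2.0)
d2cc_sh = dose_to_hottest_volume(shielded, 0.5, 2.0)
reduction_pct = 100 * (d2cc_unsh - d2cc_sh) / d2cc_unsh
```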
Sterpin, E; Mackie, T R; Vynckier, S
2012-07-01
To determine k_(Qmsr,Qo)^(fmsr,fo) correction factors for machine-specific reference (msr) conditions by Monte Carlo (MC) simulations for reference dosimetry of TomoTherapy static beams for ion chambers Exradin A1SL, A12; PTW 30006, 31010 Semiflex, 31014 PinPoint, 31018 microLion; NE 2571. For the calibration of TomoTherapy units, reference conditions specified in current codes of practice like IAEA/TRS-398 and AAPM/TG-51 cannot be realized. To cope with this issue, Alfonso et al. [Med. Phys. 35, 5179-5186 (2008)] described a new formalism introducing msr factors k_(Qmsr,Qo)^(fmsr,fo) for reference dosimetry, applicable to static TomoTherapy beams. In this study, those factors were computed directly using MC simulations for Qo corresponding to a simplified 60Co beam in TRS-398 reference conditions (at 10 cm depth). The msr conditions were a 10 × 5 cm² TomoTherapy beam, source-surface distance of 85 cm and 10 cm depth. The chambers were modeled according to technical drawings using the egs++ package and the MC simulations were run with the egs_chamber user code. Phase-space files used as the source input were produced using PENELOPE after simulation of a simplified 60Co beam and the TomoTherapy treatment head modeled according to technical drawings. Correlated sampling, intermediate phase-space storage, and photon cross-section enhancement variance reduction techniques were used. The simulations were stopped when the combined standard uncertainty was below 0.2%. Computed k_(Qmsr,Qo)^(fmsr,fo) values were all close to one, in a range from 0.991 for the PinPoint chamber to 1.000 for the Exradin A12 with a statistical uncertainty below 0.2%. Considering a beam quality Q defined as the TPR20,10 for a 6 MV Elekta photon beam (0.661), the additional correction k_(Qmsr,Q)^(fmsr,fref) to k_(Q,Qo) defined in the Alfonso et al. formalism was in a range from 0.997 to 1.004. The MC computed factors
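The msr factor itself is a double ratio of dose to water over dose to the chamber cavity, taken between the msr field and the conventional reference field. As a sketch (the numerical inputs below are invented, not the paper's MC scores):

```python
def k_msr(dose_water_msr, dose_chamber_msr, dose_water_ref, dose_chamber_ref):
    """k_(Qmsr,Qo)^(fmsr,fo) = (D_w / D_ch)_msr / (D_w / D_ch)_ref,
    each ratio taken from paired MC scores of dose to water and dose to
    the chamber's air cavity in the respective field."""
    return (dose_water_msr / dose_chamber_msr) / (dose_water_ref / dose_chamber_ref)

# Illustrative values only; real factors in the study were 0.991-1.000.
k = k_msr(1.000, 1.012, 1.000, 1.021)
```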
Bujila, R; Nowik, P; Poludniowski, G
2014-06-01
Purpose: ImpactMC (CT Imaging, Erlangen, Germany) is a Monte Carlo (MC) software package that offers a GPU enabled, user definable and validated method for 3D dose distribution calculations for radiography and Computed Tomography (CT). ImpactMC, in and of itself, offers limited capabilities to perform batch simulations. The aim of this work was to develop a framework for the batch simulation of absorbed organ dose distributions from CT scans of computational voxel phantoms. Methods: The ICRP 110 adult Reference Male and Reference Female computational voxel phantoms were formatted into compatible input volumes for MC simulations. A Matlab (The MathWorks Inc., Natick, MA) script was written to loop through a user defined set of simulation parameters and 1) generate input files required for the simulation, 2) start the MC simulation, 3) segment the absorbed dose for organs in the simulated dose volume and 4) transfer the organ doses to a database. A demonstration of the framework is made where the glandular breast dose to the adult Reference Female phantom, for a typical Chest CT examination, is investigated. Results: A batch of 48 contiguous simulations was performed with variations in the total collimation and spiral pitch. The demonstration of the framework showed that the glandular dose to the right and left breast will vary depending on the start angle of rotation, total collimation and spiral pitch. Conclusion: The developed framework provides a robust and efficient approach to performing a large number of user defined MC simulations with computational voxel phantoms in CT (minimal user interaction). The resulting organ doses from each simulation can be accessed through a database which greatly increases the ease of analyzing the resulting organ doses. The framework developed in this work provides a valuable resource when investigating different dose optimization strategies in CT.
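The four-step batch loop (generate inputs, run the simulation, segment organ doses, store to a database) might be wired up along these lines; `run_simulation` is a stand-in for the actual ImpactMC invocation, and an in-memory SQLite table replaces the real database:

```python
import itertools
import sqlite3

def run_simulation(params):
    """Placeholder for generating input files, launching the MC run, and
    segmenting organ doses from the output volume (all hypothetical);
    returns a mapping of organ -> absorbed dose (mGy)."""
    collimation, pitch = params
    return {"breast_left": 2.0 * pitch, "breast_right": 2.1 * pitch}

# Parameter sweep over total collimation (mm) and spiral pitch.
collimations = [20.0, 40.0]
pitches = [0.6, 1.0]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE doses (collimation REAL, pitch REAL, organ TEXT, dose REAL)")
for coll, pitch in itertools.product(collimations, pitches):
    for organ, dose in run_simulation((coll, pitch)).items():
        db.execute("INSERT INTO doses VALUES (?, ?, ?, ?)", (coll, pitch, organ, dose))

n_rows = db.execute("SELECT COUNT(*) FROM doses").fetchone()[0]
```

Storing each run's organ doses in a queryable table is what makes the downstream dose-optimization analysis the abstract mentions straightforward.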
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Jones, Bernard L; Cho, Sang Hyun
2011-06-21
A recent study investigated the feasibility to develop a bench-top x-ray fluorescence computed tomography (XFCT) system capable of determining the spatial distribution and concentration of gold nanoparticles (GNPs) in vivo using a diagnostic energy range polychromatic (i.e. 110 kVp) pencil-beam source. In this follow-up study, we examined the feasibility of a polychromatic cone-beam implementation of XFCT by Monte Carlo (MC) simulations using the MCNP5 code. In the current MC model, cylindrical columns with various sizes (5-10 mm in diameter) containing water loaded with GNPs (0.1-2% gold by weight) were inserted into a 5 cm diameter cylindrical polymethyl methacrylate phantom. The phantom was then irradiated by a lead-filtered 110 kVp x-ray source, and the resulting gold fluorescence and Compton-scattered photons were collected by a series of energy-sensitive tallies after passing through lead parallel-hole collimators. A maximum-likelihood iterative reconstruction algorithm was implemented to reconstruct the image of GNP-loaded objects within the phantom. The effects of attenuation of both the primary beam through the phantom and the gold fluorescence photons en route to the detector were corrected during the image reconstruction. Accurate images of the GNP-containing phantom were successfully reconstructed for three different phantom configurations, with both spatial distribution and relative concentration of GNPs well identified. The pixel intensity of regions containing GNPs was linearly proportional to the gold concentration. The current MC study strongly suggests the possibility of developing a bench-top, polychromatic, cone-beam XFCT system for in vivo imaging.
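The maximum-likelihood iterative reconstruction mentioned above is commonly the MLEM update; a tiny noise-free sketch follows, with the system matrix (which in XFCT would fold in collimator response and the attenuation corrections described) and concentrations invented:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image (column sums)
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# Tiny invented system matrix and true GNP concentration per pixel.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.9]])
x_true = np.array([0.0, 2.0, 1.0])
y = A @ x_true                 # noise-free fluorescence counts
x_rec = mlem(A, y)
```

The multiplicative update keeps the reconstruction non-negative, consistent with the linear counts-vs-concentration behavior the study reports.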
Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy
2016-03-11
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for simulating radiographic images is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU ran about 64 to 114 times faster than on the CPU, while the simulations on the 384-core GPU ran about 20 to 31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with at least 10⁸ histories and photon energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Côté, Nicolas; Bedwani, Stéphane; Carrier, Jean-François
2016-05-01
An improvement in tissue assignment for low-dose rate brachytherapy (LDRB) patients using more accurate Monte Carlo (MC) dose calculation was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I_H/L) to generate a new image (I_Mix). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I_Mix was calculated and shifted toward the mean HU of the two original DECT images (I_H/L). The third step involved smoothing the newly shifted I_Mix and the two original I_H/L, followed by a subtraction of both, generating an image that represented the metallic artifact (I_A,(H/L)) of reduced noise levels. The final step consisted of subtracting the original I_H/L from the newly generated I_A,(H/L) and obtaining a final image corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ_e) and effective atomic number (Z_eff) at each voxel of the corrected scans. Tissue assignment could then be determined with these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ_e and Z_eff, comparing with values from the ICRU 42 database. A MC study was then performed to compare the dosimetric impacts of alternative MAR algorithms. An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D_90 dose parameter (volume receiving 90% of the dose) indicated that D_90 could be underestimated by up to 2
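A compact sketch of the four-step procedure; the final subtraction direction is assumed from context, the toy streaks are constructed so the blend cancels them, and the smoothing is left as a no-op (k=1) where a real scan would use a larger kernel for the noise reduction of step 3:

```python
import numpy as np

def box_blur(img, k):
    """Crude mean filter; k=1 leaves the image unchanged."""
    if k == 1:
        return img.copy()
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def dect_mar(I_H, I_L, roi, w=0.5, k=1):
    """Four-step DECT metal-artifact reduction (sign conventions assumed):
    1) blend the two scans, 2) shift the blend toward each scan's ROI mean,
    3) smooth and subtract to estimate the artifact image,
    4) subtract the artifact estimate from the original scan."""
    I_mix = w * I_H + (1 - w) * I_L                            # step 1
    out = []
    for I in (I_H, I_L):
        I_shift = I_mix + (I[roi].mean() - I_mix[roi].mean())  # step 2
        artifact = box_blur(I, k) - box_blur(I_shift, k)       # step 3
        out.append(I - artifact)                               # step 4
    return out

# Toy scans: uniform 40 HU anatomy with complementary streaks in H and L.
base = np.full((8, 8), 40.0)
streak = np.zeros((8, 8))
streak[4, :] = 50.0
I_H, I_L = base + streak, base - streak
roi = np.ones((8, 8), dtype=bool)

corr_H, corr_L = dect_mar(I_H, I_L, roi)  # streaks removed, images flat
```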
NASA Astrophysics Data System (ADS)
Ding, Aiping; Mille, Matthew M.; Liu, Tianyu; Caracappa, Peter F.; Xu, X. George
2012-05-01
Although it is known that obesity has a profound effect on x-ray computed tomography (CT) image quality and patient organ dose, quantitative data describing this relationship are not currently available. This study examines the effect of obesity on the calculated radiation dose to organs and tissues from CT using newly developed phantoms representing overweight and obese patients. These phantoms were derived from the previously developed RPI-adult male and female computational phantoms. The result was a set of ten phantoms (five males, five females) with body mass indexes ranging from 23.5 (normal body weight) to 46.4 kg m⁻² (morbidly obese). The phantoms were modeled using triangular mesh geometry and include specified amounts of the subcutaneous adipose tissue and visceral adipose tissue. The mesh-based phantoms were then voxelized and defined in the Monte Carlo N-Particle Extended code to calculate organ doses from CT imaging. Chest-abdomen-pelvis scanning protocols for a GE LightSpeed 16 scanner operating at 120 and 140 kVp were considered. It was found that for the same scanner operating parameters, radiation doses to organs deep in the abdomen (e.g., colon) can be up to 59% smaller for obese individuals compared to those of normal body weight. This effect was found to be less significant for shallow organs. On the other hand, increasing the tube potential from 120 to 140 kVp for the same obese individual resulted in increased organ doses by as much as 56% for organs within the scan field (e.g., stomach) and 62% for those out of the scan field (e.g., thyroid), respectively. As higher tube currents are often used for larger patients to maintain image quality, it was of interest to quantify the associated effective dose. It was found from this study that when the mAs was doubled for the obese level-I, obese level-II and morbidly-obese phantoms, the effective dose relative to that of the normal weight phantom increased by 57%, 42% and 23%, respectively.
Lee, C; Badal, A
2014-06-15
Purpose: Computational voxel phantom provides realistic anatomy but the voxel structure may result in dosimetric error compared to real anatomy composed of perfect surface. We analyzed the dosimetric error caused from the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. Difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels were 6.4×10⁴, 3.3×10⁴, and 1.3×10⁴, for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10⁴ histories/sec. Conclusion: The combination of hybrid mesh phantom and penMesh was proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., radiosensitive layer in gastro intestines) which cannot be modeled by voxel phantoms.
Côté, Nicolas; Bedwani, Stéphane; Carrier, Jean-François
2016-05-15
Purpose: An improvement in tissue assignment for low-dose rate brachytherapy (LDRB) patients using more accurate Monte Carlo (MC) dose calculation was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). Methods: The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I {sub H/L}) to generate a new image (I {sub Mix}). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I {sub Mix} was calculated and shifted toward the mean HU of the two original DECT images (I {sub H/L}). The third step involved smoothing the newly shifted I {sub Mix} and the two original I {sub H/L}, followed by a subtraction of both, generating an image that represented the metallic artifact (I {sub A,(H/L)}) of reduced noise levels. The final step consisted of subtracting the original I {sub H/L} from the newly generated I {sub A,(H/L)} and obtaining a final image corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ{sub e}) and effective atomic number (Z {sub eff}) at each voxel of the corrected scans. Tissue assignment could then be determined with these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ{sub e} and Z {sub eff}, comparing with values from the ICRU 42 database. A MC study was then performed to compare the dosimetric impacts of alternative MAR algorithms. Results: An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D
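The four-step MAR procedure above can be sketched in code. This is a minimal illustration on 1-D HU profiles, not the published implementation: the blend weight `w`, the box-filter smoothing, and the sign convention used to remove the artifact estimate in step 4 are all assumptions.

```python
import numpy as np

def box_smooth(x, k=5):
    """Simple box filter standing in for the paper's (unspecified) smoothing."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def dect_mar(I_H, I_L, roi, w=0.5):
    """Sketch of the four-step DECT metal-artifact reduction on 1-D HU profiles."""
    I_mix = w * I_H + (1.0 - w) * I_L                  # step 1: weighted blend of the DECT scans
    corrected = {}
    for name, I in (("H", I_H), ("L", I_L)):
        # step 2: shift the ROI mean of I_mix toward this original scan's mean
        I_shift = I_mix + (I[roi].mean() - I_mix[roi].mean())
        # step 3: smooth both images and subtract -> low-noise artifact estimate I_A
        I_A = box_smooth(I_shift) - box_smooth(I)
        corrected[name] = I - I_A                      # step 4: remove the artifact estimate
    return corrected
```

Tissue assignment would then proceed on the corrected scans via the DECT stoichiometric extraction of ρe and Zeff, which is outside this sketch.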
NASA Astrophysics Data System (ADS)
Ghosh, Karabi
2017-02-01
We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor, f = 1 / (1 + βcΔtσP), the author derived the modified Fleck factor g = 1 / (1 + βcΔtσP - min [σP‧ (a Tr4 - aT4) cΔt/ρCV, 0 ]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4 aT3 / ρCV, σP is the Planck opacity and the derivative of Planck opacity w.r.t. the material temperature is σP‧ = dσP / dT.
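Both factors are straightforward to evaluate from the quantities defined in the comment. The sketch below is unit-agnostic (the caller supplies c and a in a consistent unit system); the numerical inputs in any example are illustrative only.

```python
def fleck_factors(T, T_r, dt, rho, C_V, sigma_P, dsigma_dT, c, a):
    """Standard (Fleck-Cummings) and modified (Gentile) Fleck factors.

    T, T_r    : material and radiation temperatures
    sigma_P   : Planck opacity; dsigma_dT = d(sigma_P)/dT
    c, a      : speed of light and radiation constant (consistent units assumed)
    """
    beta = 4.0 * a * T**3 / (rho * C_V)
    f = 1.0 / (1.0 + beta * c * dt * sigma_P)          # Fleck-Cummings factor
    # Gentile's modification: only a negative correction term (from the
    # temperature dependence of the opacity) alters the denominator
    corr = dsigma_dT * (a * T_r**4 - a * T**4) * c * dt / (rho * C_V)
    g = 1.0 / (1.0 + beta * c * dt * sigma_P - min(corr, 0.0))
    return f, g
```

When the correction term is non-negative, g reduces to the standard factor f; a negative correction makes g smaller, damping the effective emission over the time step.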
NASA Astrophysics Data System (ADS)
Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung
2012-08-01
The energy-resolved photon counting detector provides the spectral information that can be used to generate images. The novel imaging methods, including the K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using the Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled the x-ray computed tomography system based on the CdTe energy-resolved photon counting detector and polymethylmethacrylate phantom, which have iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at K-edge absorption energy of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased, when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above the specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is a little difference in the contrast, when the number of energy bin is
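The energy-weighting step described above amounts to a weighted combination of the per-bin images. The sketch below illustrates that combination only; the weighting factors and the normalisation are placeholders, not the optimized factors computed in the study.

```python
import numpy as np

def energy_weighted_image(bin_images, weights):
    """Image-based energy weighting (a sketch): combine the images acquired
    in each energy bin using per-bin weighting factors."""
    w = np.asarray(weights, dtype=float)
    imgs = np.asarray(bin_images, dtype=float)       # shape: (n_bins, ny, nx)
    return np.tensordot(w / w.sum(), imgs, axes=1)   # weighted sum over bins
```

Projection-based weighting would apply the same combination to the projections before reconstruction rather than to the reconstructed images.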
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
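The phases the MCB exercises can be illustrated with a single-process toy. This is not the MCB itself: the slab geometry, cross-section, and absorption probability are invented, the MPI particle trading is omitted, and surviving particles simply continue forward instead of scattering.

```python
import math
import random

def toy_transport(n_particles=20_000, slab=5.0, sigma_t=1.0, absorb_p=0.5, seed=1):
    """Toy 1-D Monte Carlo transport: creation, tracking, tallying, destruction."""
    rng = random.Random(seed)
    tally = {"absorbed": 0, "leaked": 0}
    for _ in range(n_particles):                        # particle creation at x = 0
        x = 0.0
        while True:                                     # particle tracking
            x += -math.log(1.0 - rng.random()) / sigma_t  # exponential free flight
            if x >= slab:
                tally["leaked"] += 1                    # destruction: leakage
                break
            if rng.random() < absorb_p:
                tally["absorbed"] += 1                  # destruction: absorption
                break
    return tally
```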
USDA-ARS?s Scientific Manuscript database
A model to simulate radiative transfer (RT) of sun-induced chlorophyll fluorescence (SIF) of three-dimensional (3-D) canopy, FluorWPS, was proposed and evaluated. The inclusion of fluorescence excitation was implemented with the ‘weight reduction’ and ‘photon spread’ concepts based on Monte Carlo ra...
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_{L}. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_{0} > h_{1} > ... > h_{L}. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
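The telescoping identity underlying MLMC can be sketched directly. The toy below assumes i.i.d. sampling is available at every level (exactly the assumption the SMC version of the paper removes) and that a coupled evaluation f(l, u) of the level-l discretisation on a common random input u exists.

```python
import random

def mlmc_estimate(f, L, n_per_level, seed=0):
    """Sketch of the MLMC telescoping estimator
        E[f_L] = E[f_0] + sum_{l=1}^{L} E[f_l - f_{l-1}],
    with an independent Monte Carlo average for each level."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(n_per_level[l]):
            u = rng.random()
            # coupled difference: both levels evaluated on the same input u
            acc += f(l, u) - (f(l - 1, u) if l > 0 else 0.0)
        est += acc / n_per_level[l]
    return est
```

Because the coupled differences have small variance at fine levels, few samples are needed there, which is the source of the computational saving.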
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
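The technique VARHATOM illustrates can be sketched compactly. This is a Python sketch in the spirit of the program, not the FORTRAN original: Metropolis-sample |ψ|² for the trial wavefunction ψ = exp(-αr) and average the local energy E_L = -α²/2 + (α - 1)/r (Hartree atomic units), which is exactly -1/2 when α = 1. The step size and walker initialisation are arbitrary choices.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.6, seed=0):
    """Variational Monte Carlo estimate of the hydrogen ground-state energy."""
    rng = random.Random(seed)
    pos = [1.0, 0.0, 0.0]
    r = math.sqrt(sum(c * c for c in pos))
    e_sum = 0.0
    for _ in range(n_steps):
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        r_new = math.sqrt(sum(c * c for c in trial))
        # Metropolis acceptance with ratio |psi(trial)/psi(pos)|^2 = exp(-2a(r'-r))
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        e_sum += -0.5 * alpha**2 + (alpha - 1.0) / r   # local energy
    return e_sum / n_steps
```

By the variational principle the average energy is minimised (at -0.5 Hartree) by α = 1, where the local energy is constant and the statistical variance vanishes.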
State-of-the-art Monte Carlo 1988
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Petroccia, H; Bolch, W; Li, Z; Mendenhall, N
2015-06-15
Purpose: Mean organ doses from structures located in field and outside of field boundaries during radiotherapy treatment must be considered when looking at secondary effects. Treatment planning for patients with 40 years of follow-up did not include 3-D treatment planning images and did not estimate dose to structures out of the direct field. Therefore, it is of interest to correlate actual clinical events with doses received. Methods: Accurate models of radiotherapy machines combined with whole body computational phantoms using Monte Carlo methods allow for dose reconstructions intended for studies on late radiation effects. The Theratron-780 radiotherapy unit and anatomically realistic hybrid computational phantoms are modeled in the Monte Carlo radiation transport code MCNPX. The major components of the machine including the source capsule, lead in the unit-head, collimators (fixed/adjustable), and trimmer bars are simulated. The MCNPX transport code is used to compare calculated values in a water phantom with published data from BJR suppl. 25 for in-field doses and experimental data from AAPM Task Group No. 36 for out-of-field doses. Next, the validated cobalt-60 teletherapy model is combined with the UF/NCI Family of Reference Hybrid Computational Phantoms as a methodology for estimating organ doses. Results: The model of the Theratron-780 has been shown to agree with percentage depth dose data within approximately 1%, and for out-of-field doses the machine model agrees within 8.8%. Organ doses are reported for reference hybrid phantoms. Conclusion: Combining the UF/NCI Family of Reference Hybrid Computational Phantoms with a validated model of the Theratron-780 allows for organ dose estimates of both in-field and out-of-field organs. By changing field size, position, and adding patient-specific blocking, more complicated treatment set-ups can be recreated for patients treated historically, particularly those who lack both 2D/3D image sets.
Moskvin, V; Cheng, C; Anferov, V; Nichiporov, D; Zhao, Q; Takashina, M; Parola, R; Das, I
2012-06-01
Charged particle therapy, especially proton therapy, is a growing treatment modality worldwide. Monte Carlo (MC) simulation of the interactions of a proton beam with equipment, devices and patient is a highly efficient tool that can substitute measurements for complex and unrealistic experiments. The purpose of this study is to design a MC model of a treatment nozzle to characterize the proton scanning beam and to commission the model for the Indiana University Health Proton Therapy Center (IUHPTC). The general purpose Monte Carlo code FLUKA was used for simulation of the proton beam passage through the elements of the treatment nozzle design. The geometry of the nozzle was extracted from the design blueprints. The initial parameters for beam simulation were determined from calculations of beam optics design to derive a semi-empirical model describing the initial parameters of the beam entering the nozzle. The lateral fluence and energy distribution of the beam entering the nozzle is defined as a function of the requested range. The uniform scanning model at the IUHPTC is implemented. The results of simulation with the beam and nozzle model are compared and verified with measurements. The lateral particle distribution and energy spectra of the proton beam entering the nozzle were compared with measurements in the interval of energies from 70 MeV to 204.8 MeV. The accuracy of the description of the proton beam by MC simulation is better than 2% compared with measurements, providing confidence for complex simulation in phantom and patient dosimetry with the MC simulated nozzle and the uniform scanning proton beam. The treatment nozzle and beam model were accurately implemented in the FLUKA Monte Carlo code and are suitable for research purposes in simulating the scanning beam at IUHPTC. © 2012 American Association of Physicists in Medicine.
NASA Technical Reports Server (NTRS)
Kikuchi, K.; Barakat, A.; St-Maurice, J.-P.
1989-01-01
Monte Carlo simulations of ion velocity distributions in the high-latitude F region have been performed in order to improve the calculation of incoherent radar spectra in the auroral ionosphere. The results confirm that when the ion temperature becomes large due to frictional heating in the presence of collisions with the neutral background constituent, F region spectra evolve from a normal double hump, to a triple hump, to a spectrum with a single maximum. An empirical approach is developed to overcome the inadequacy of the Maxwellian assumption for the case of radar aspect angles of between 30 and 70 deg.
Riboldi, M.; Chen, G. T. Y.; Baroni, G.; Paganetti, H.; Seco, J.
2015-01-01
We have designed a simulation framework for motion studies in radiation therapy by integrating the anthropomorphic NCAT phantom into a 4D Monte Carlo dose calculation engine based on DPM. Representing an artifact-free environment, the system can be used to identify class solutions as a function of geometric and dosimetric parameters. A pilot dynamic conformal study for three lesions (~2.0 cm) in the right lung was performed (70 Gy prescription dose). Tumor motion changed as a function of tumor location, according to the anthropomorphic deformable motion model. Conformal plans were simulated with 0 to 2 cm margin for the aperture, with an additional 0.5 cm for beam penumbra. The dosimetric effects of intensity modulated radiotherapy (IMRT) vs. conformal treatments were compared in a static case. Results show that the Monte Carlo simulation framework can model tumor tracking in deformable anatomy with high accuracy, providing absolute doses for IMRT and conformal radiation therapy. A target underdosage of up to 3.67 Gy (lower lung) was highlighted in the composite dose distribution mapped at exhale. Such effects depend on tumor location and treatment margin and are affected by lung deformation and ribcage motion. In summary, the complexity in the irradiation of moving targets has been reduced to a controlled simulation environment, where several treatment options can be accurately modeled and quantified. The implemented tools will be utilized for extensive motion study in lung/liver irradiation. PMID:19044324
Yazdi, Hossein Salehi; Shamsaei, Mojtaba; Jaberi, Ramin; Shabani, Hamid Reza; Allahverdi, Mahmoud; Vaezzadeh, Seyed Ali
2012-01-01
This study investigates how accurately a commercially available Ir-192 high-dose-rate (HDR) treatment planning system calculates the dose received by the lungs in breast brachytherapy, with emphasis on tissue heterogeneities and on the presence of ribs in dose delivery to the lung. A computed tomography (CT) scan of a breast was acquired and transferred to the 3-D treatment planning system and was also used to construct a patient-equivalent phantom. An implant involving 13 plastic catheters and 383 programmed source dwell positions was simulated using the Monte Carlo N-Particle eXtended (MCNPX) code. The Monte Carlo calculations were compared with the corresponding commercial treatment planning system (TPS) results in the form of percentage isodose and cumulative dose-volume histograms (DVH) in the breast, lungs, and ribs. The comparison of the Monte Carlo results and the TPS calculations showed that percentage isodoses greater than 75% in the breast, located rather close to the implant or away from the breast curvature surface and lung boundary, were in good agreement. TPS calculations overestimated the dose to the lung for lower isodose contours lying near the breast surface and the boundary of breast and lung, relatively far from the implant. Taking into account the ribs and entering the actual data for breasts, ribs, and lungs revealed an average overestimation of the lung dose by 8% in the TPS calculations. Therefore, the accuracy of the TPS results may be limited to regions near the implants where the treatment is planned; Monte Carlo calculation is a more conservative approach for regions at boundaries with curvatures or tissues of a different material than the breast.
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-06-15
Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries like multi-wire chambers in the beam path of a particle therapy beam line. The effect was described by a mathematical model which was implemented in the Monte Carlo code FLUKA via user routines, in order to reduce the computation time for the simulations. Methods: The depth dose curve of 80 MeV/u C12 ions in a water phantom was calculated using the Monte Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ~80 microns) occurring in a typical set of multi-wire and dose chambers was mathematically described by optimizing a normal distribution so that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was transformed into a probability distribution of the thickness of the eleven foils using the water equivalent thickness of the foil material. From this distribution, the thickness distribution of a single foil was determined inversely. In FLUKA the heterogeneous foils were replaced by homogeneous foils, and a user routine was programmed that varies the thickness of the homogeneous foils for each simulated particle using this distribution. Results: Using the mathematical model and user routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones. The computation time was reduced by 90 percent. Conclusion: In this study the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model and implemented in FLUKA via user routines. Applying these routines, the computing time was reduced by 90 percent. The developed tool can be used for any heterogeneous structure in the dimensions of microns to millimeters, in principle even for organic materials like lung tissue.
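The core of the model is a convolution of the reference depth-dose curve with a normal distribution of water-equivalent displacement. A minimal sketch of that step, assuming a uniform depth grid and an illustrative sigma (the fitted width is not given here):

```python
import numpy as np

def modulated_depth_dose(depth_mm, dose, sigma_mm):
    """Reference Bragg curve convolved with a Gaussian displacement distribution."""
    dz = depth_mm[1] - depth_mm[0]                    # uniform grid assumed
    s = np.arange(-4.0 * sigma_mm, 4.0 * sigma_mm + dz, dz)
    kernel = np.exp(-0.5 * (s / sigma_mm) ** 2)
    kernel /= kernel.sum()                            # unit-area discrete Gaussian
    return np.convolve(dose, kernel, mode="same")     # broadened, lower peak
```

The convolution lowers and widens the peak while conserving the integral dose, which is the broadening effect the user routines reproduce by randomising the homogeneous foil thickness per particle.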
Abuhaimed, Abdullah; J Martin, Colin; Sankaralingam, Marimuthu; J Gentle, David; McJury, Mark
2014-11-07
The IEC has introduced a practical approach to overcome shortcomings of the CTDI100 for measurements on wide beams employed for cone beam (CBCT) scans. This study evaluated the efficiency of this approach (CTDIIEC) for different arrangements using Monte Carlo simulation techniques, and compared CTDIIEC to the efficiency of CTDI100 for CBCT. Monte Carlo EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc codes were used to simulate the kV imaging system mounted on a Varian TrueBeam linear accelerator. The Monte Carlo model was benchmarked against experimental measurements and good agreement shown. Standard PMMA head and body phantoms with lengths 150, 600, and 900 mm were simulated. Beam widths studied ranged from 20-300 mm, and four scanning protocols using two acquisition modes were utilized. The efficiency values were calculated at the centre (εc) and periphery (εp) of the phantoms and for the weighted CTDI (εw). The efficiency values for CTDI100 were approximately constant for beam widths 20-40 mm, where εc(CTDI100), εp(CTDI100), and εw(CTDI100) were 74.7 ± 0.6%, 84.6 ± 0.3%, and 80.9 ± 0.4%, for the head phantom and 59.7 ± 0.3%, 82.1 ± 0.3%, and 74.9 ± 0.3%, for the body phantom, respectively. When beam width increased beyond 40 mm, ε(CTDI100) values fell steadily reaching ~30% at a beam width of 300 mm. In contrast, the efficiency of the CTDIIEC was approximately constant over all beam widths, demonstrating its suitability for assessment of CBCT. εc(CTDIIEC), εp(CTDIIEC), and εw(CTDIIEC) were 76.1 ± 0.9%, 85.9 ± 1.0%, and 82.2 ± 0.9% for the head phantom and 60.6 ± 0.7%, 82.8 ± 0.8%, and 75.8 ± 0.7%, for the body phantom, respectively, within 2% of ε(CTDI100) values for narrower beam widths. CTDI100,w and CTDIIEC,w underestimate CTDI∞,w by ~55% and ~18% for the head phantom and by ~56% and ~24% for the body phantom, respectively, using a clinical beam width 198 mm. The
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
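The split between exact and sampled matrix multiplication can be illustrated on a small dense matrix. This is only a sketch of the idea, not the published algorithm (which targets very large sparse Hamiltonians): the choice of deterministic index set, the importance-sampling scheme, and the sample count are all assumptions.

```python
import numpy as np

def semistochastic_power(A, det_idx, n_iter=500, n_samp=50, seed=0):
    """Power iteration with a semistochastic matrix-vector product: exact on a
    small 'deterministic' set of columns, sampled on the remainder."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    det = np.zeros(n, dtype=bool)
    det[det_idx] = True
    v = np.ones(n) / np.sqrt(n)
    evals = []
    for _ in range(n_iter):
        w = A[:, det] @ v[det]                       # exact part of A @ v
        idx = np.flatnonzero(~det & (v != 0.0))
        if idx.size:
            p = np.abs(v[idx]) / np.abs(v[idx]).sum()
            k = rng.choice(idx.size, size=n_samp, p=p)
            # unbiased importance-sampled estimate of the remaining columns
            w += (A[:, idx[k]] * (v[idx[k]] / (p[k] * n_samp))).sum(axis=1)
        v = w / np.linalg.norm(w)
        evals.append(v @ A @ v)                      # Rayleigh-quotient estimate
    return float(np.mean(evals[n_iter // 2:]))
```

As the iterate concentrates on the deterministic set, the stochastic contribution (and its variance) shrinks, which mirrors the variance reduction claimed for the semistochastic method.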
NASA Astrophysics Data System (ADS)
Cochran, Thomas
2007-04-01
In 2002 and again in 2003, an investigative journalist unit at ABC News transported a 6.8 kilogram metallic slug of depleted uranium (DU) via shipping container from Istanbul, Turkey to Brooklyn, NY and from Jakarta, Indonesia to Long Beach, CA. Targeted inspection of these shipping containers by Department of Homeland Security (DHS) personnel, which included the use of gamma-ray imaging, portal monitors and hand-held radiation detectors, did not uncover the hidden DU. Monte Carlo analysis of the gamma-ray intensity and spectrum of a DU slug and one consisting of highly-enriched uranium (HEU) showed that DU was a proper surrogate for testing the ability of DHS to detect the illicit transport of HEU. Our analysis using MCNP-5 illustrated the ease of fully shielding an HEU sample to avoid detection. The assembly of an Improvised Nuclear Device (IND) -- a crude atomic bomb -- from sub-critical pieces of HEU metal was then examined via Monte Carlo criticality calculations. Nuclear explosive yields of such an IND as a function of the speed of assembly of the sub-critical HEU components were derived. A comparison was made between the more rapid assembly of sub-critical pieces of HEU in the ``Little Boy'' (Hiroshima) weapon's gun barrel and gravity assembly (i.e., dropping one sub-critical piece of HEU on another from a specified height). Based on the difficulty of detection of HEU and the straightforward construction of an IND utilizing HEU, current U.S. government policy must be modified to more urgently prioritize elimination of and securing the global inventories of HEU.
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Roos, M.; Bansmann, J.; Behm, R. J.; Zhang, D.; Deutschmann, O.
2010-09-07
The transport and distribution of reaction products above catalytically active Pt microstructures were studied by spatially resolved scanning mass spectrometry (SMS) in combination with Monte Carlo simulation and fluid dynamics calculations, using the oxidation of CO as a test reaction. The spatial gas distribution above the Pt fields was measured via a thin quartz capillary connected to a mass spectrometer. Measurements were performed in two different pressure regimes, characteristic of ballistic mass transfer and of diffusion involving multiple collisions for the motion of CO₂ product molecules between the sample and the capillary tip, and using differently sized and shaped Pt microstructures. The tip-height-dependent lateral resolution of the SMS measurements, as well as contributions from shadowing effects due to mass transport limitations between the capillary tip and sample surface at close separations, were evaluated and analyzed. The data allow one to define measurement and reaction conditions under which effects induced by the capillary tip can be neglected ("minimally invasive measurements") and provide a basis for the evaluation of catalyst activities on microstructured model systems, e.g., for catalyst screening or studies of transport effects.
Morant, JJ; Salvadó, M; Hernández-Girón, I; Casanovas, R; Ortega, R; Calzado, A
2013-01-01
Objectives: The aim of this study was to calculate organ and effective doses for a range of available protocols in a particular cone beam CT (CBCT) scanner dedicated to dentistry and to derive effective dose conversion factors. Methods: Monte Carlo simulations were used to calculate organ and effective doses using the International Commission on Radiological Protection voxel adult male and female reference phantoms (AM and AF) in an i-CAT CBCT. Nine different fields of view (FOVs) were simulated considering full- and half-rotation modes, and also a high-resolution acquisition for a particular protocol. Dose–area product (DAP) was measured. Results: Dose to organs varied for the different FOVs, usually being higher in the AF phantom. For 360°, effective doses were in the range of 25–66 μSv, and 46 μSv for full head. Higher contributions to the effective dose corresponded to the remainder (31%; 27–36 range), salivary glands (23%; 20–29%), thyroid (13%; 8–17%), red bone marrow (10%; 9–11%) and oesophagus (7%; 4–10%). The high-resolution protocol doubled the standard resolution doses. DAP values were between 181 mGy cm2 and 556 mGy cm2 for 360°. For 180° protocols, dose to organs, effective dose and DAP were approximately 40% lower. A conversion factor (DAP to effective dose) of 0.130 ± 0.006 μSv mGy−1 cm−2 was derived for all the protocols, excluding full head. A wide variation in dose to eye lens and thyroid was found when shifting the FOV in the AF phantom. Conclusions: Organ and effective doses varied according to field size, acquisition angle and positioning of the beam relative to radiosensitive organs. Good positive correlation between calculated effective dose and measured DAP was found. PMID:22933532
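The derived conversion factor turns a measured DAP reading directly into an effective dose estimate. A minimal illustration (the DAP value here is hypothetical, chosen inside the reported 181-556 mGy cm² range):

```python
# Effective dose from measured DAP via the paper's derived conversion factor.
K = 0.130           # uSv per (mGy cm^2); excludes full-head protocols
dap = 350.0         # mGy cm^2, hypothetical measurement within the reported range
effective_dose = K * dap
print(f"{effective_dose:.1f} uSv")   # 45.5 uSv
```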
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
ERIC Educational Resources Information Center
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.
2010-01-01
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Monte Carlo Methods in the Physical Sciences
Kalos, M H
2007-06-06
I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
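As a concrete reminder of the basic Markov chain Monte Carlo element reviewed here, a minimal random-walk Metropolis sampler (the standard-normal target is a stand-in; real pedigree likelihoods are far more structured):

```python
import math, random

def metropolis(logp, x0, steps, scale=1.0, seed=1):
    """Random-walk Metropolis: the basic MCMC building block used to
    approximate expectations under an unnormalized density."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        lq = logp(y)
        if lq - lp >= 0.0 or rng.random() < math.exp(lq - lp):
            x, lp = y, lq            # accept with prob min(1, p(y)/p(x))
        samples.append(x)
    return samples

# Standard-normal target as a stand-in for a pedigree likelihood.
s = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(s) / len(s)
```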
Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X George
2014-07-01
Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified Woodcock tracking algorithm
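The gamma index analysis used above to compare ARCHERRT and GEANT4 doses combines a dose-difference criterion with a distance-to-agreement criterion. A 1D sketch with toy Gaussian profiles and 2%/2 mm tolerances (an illustration, not the authors' implementation):

```python
import numpy as np

def gamma_pass_rate(x, ref, ev, dose_tol=0.02, dist_tol=2.0):
    """1D gamma index: for each reference voxel above 10% of max dose,
    minimize the combined dose-difference / distance-to-agreement metric
    over all evaluated points; report the fraction with gamma <= 1."""
    dmax = ref.max()
    sel = ref > 0.1 * dmax
    dd = (ev[None, :] - ref[sel, None]) / (dose_tol * dmax)
    dx = (x[None, :] - x[sel, None]) / dist_tol
    gamma = np.sqrt(dd ** 2 + dx ** 2).min(axis=1)
    return float((gamma <= 1.0).mean())

x = np.linspace(0.0, 100.0, 201)             # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)      # toy reference depth-dose profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)       # toy evaluated profile, 0.5 mm shift
rate = gamma_pass_rate(x, ref, ev)           # 2%/2 mm criteria
```

A 0.5 mm shift is well inside the 2 mm distance tolerance, so the toy profiles pass everywhere.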
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Abuhaimed, Abdullah; Martin, Colin J; Sankaralingam, Marimuthu; Gentle, David J
2015-07-21
A function called Gx(L) was introduced by the International Commission on Radiation Units and Measurements (ICRU) Report-87 to facilitate measurement of cumulative dose for CT scans within long phantoms as recommended by the American Association of Physicists in Medicine (AAPM) TG-111. The Gx(L) function is equal to the ratio of the cumulative dose at the middle of a CT scan to the volume weighted CTDI (CTDIvol), and was investigated for conventional multi-slice CT scanners operating with a moving table. As the stationary table mode, which is the basis for cone beam CT (CBCT) scans, differs from that used for conventional CT scans, the aim of this study was to investigate the extension of the Gx(L) function to CBCT scans. An On-Board Imager (OBI) system integrated with a TrueBeam linac was simulated with Monte Carlo EGSnrc/BEAMnrc, and the absorbed dose was calculated within PMMA, polyethylene (PE), and water head and body phantoms using EGSnrc/DOSXYZnrc, where the PE body phantom emulated the ICRU/AAPM phantom. Beams of width 40-500 mm and beam qualities at tube potentials of 80-140 kV were studied. Application of a modified function of beam width (W) termed Gx(W), for which the cumulative dose for CBCT scans f(0) is normalized to the weighted CTDI (CTDIw) for a reference beam of width 40 mm, was investigated as a possible option. However, differences were found in Gx(W) with tube potential, especially for body phantoms, and these were considered to be due to differences in geometry between wide beams used for CBCT scans and those for conventional CT. Therefore, a modified function Gx(W)100 has been proposed, taking the form of values of f(0) at each position in a long phantom, normalized with respect to dose indices f100(150)x measured with a 100 mm pencil ionization chamber within standard 150 mm PMMA phantoms, using the same scanning parameters, beam widths and positions within the phantom. f100(150)x averages the dose resulting from
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Search and Rescue Monte Carlo Simulation.
1985-03-01
confidence interval) of the number of lives saved. A single-page output and computer graphic present the information to the user in an easily understood format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)
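The point that additional Monte Carlo runs narrow the confidence interval can be sketched directly (the per-run model of lives saved below is entirely hypothetical):

```python
import random, statistics

def one_run(rng):
    """One Monte Carlo SAR trial: hypothetical count of lives saved."""
    return sum(rng.random() < 0.4 for _ in range(25))   # toy model, mean 10

def ci95(n_runs, seed=2):
    """Mean and 95% confidence half-width over n_runs independent runs."""
    rng = random.Random(seed)
    xs = [one_run(rng) for _ in range(n_runs)]
    half = 1.96 * statistics.stdev(xs) / n_runs ** 0.5
    return statistics.mean(xs), half

m30, h30 = ci95(30)
m480, h480 = ci95(480)      # 16x the runs -> roughly 4x narrower interval
print(f"30 runs: {m30:.2f} +/- {h30:.2f};  480 runs: {m480:.2f} +/- {h480:.2f}")
```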
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
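The derivation sketched above, subdividing the time interval so that each subinterval holds at most one count, can be reproduced in a few lines; as the number of subintervals grows, the binomial counts approach a Poisson distribution:

```python
import math, random

def simulate_counts(rate, t, n_sub, rng):
    """Counts in [0, t): subdivide into n_sub slots, each holding at most
    one count with probability p = rate * t / n_sub (binomial model)."""
    p = rate * t / n_sub
    return sum(rng.random() < p for _ in range(n_sub))

rng = random.Random(3)
rate, t, reps = 5.0, 1.0, 5000
counts = [simulate_counts(rate, t, 400, rng) for _ in range(reps)]
mean = sum(counts) / reps
p0 = counts.count(0) / reps          # P(no counts) -> exp(-rate*t) as n_sub grows
print(round(mean, 2), round(p0, 4), round(math.exp(-rate * t), 4))
```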
Gujt, Jure; Bešter-Rogač, Marija; Hribar-Lee, Barbara
2013-01-01
In very dilute aqueous solutions, ion pairing is of rather small importance for the solutions' properties, which renders its precise quantification a laborious task. Here we studied the ion pairing of alkali halides in water using precise electric conductivity measurements in dilute solutions over a wide temperature range. The low-concentration chemical model was used to analyze the results and to estimate the association constants of different alkali halide salts. It has been shown that the association constant is related to the solubility of the salts in water and produces a 'volcano relationship' when plotted against the difference between the free energies of hydration of the corresponding individual ions. Computer simulations using the simple MB+dipole water model were used to interpret the results and to find a microscopic basis for Collins' law of matching water affinities. PMID:24526801
Matsunaga, Yuta; Kawaguchi, Ai; Kobayashi, Masanao; Suzuki, Shigetaka; Suzuki, Shoichi; Chida, Koichi
2016-09-19
The purposes of this study were (1) to compare the radiation doses for 320- and 80-row fetal-computed tomography (CT), estimated using thermoluminescent dosimeters (TLDs) and the ImPACT Calculator (hereinafter referred to as the "CT dosimetry software"), for a woman in her late pregnancy and her fetus and (2) to estimate the overlapped fetal radiation dose from a 320-row CT examination using two different estimation methods of the CT dosimetry software. The direct TLD data in the present study were obtained from a previous study. The exposure parameters used for TLD measurements were entered into the CT dosimetry software, and the appropriate radiation dose for the pregnant woman and her fetus was estimated. When the whole organs (e.g., the colon, small intestine, and ovaries) and the fetus were included in the scan range, the difference in the estimated doses between the TLD measurement and the CT dosimetry software measurement was <1 mGy (<23 %) in both CT units. In addition, when the whole organs were within the scan range, the CT dosimetry software was used for evaluating the fetal radiation dose and organ-specific doses for the woman in the late pregnancy. The conventional method using the CT dosimetry software cannot take into account the overlap between volumetric sections. Therefore, the conventional method using a 320-row CT unit in a wide-volume mode might result in the underestimation of radiation doses for the fetus and the colon, small intestine, and ovaries.
Alternative implementations of Monte Carlo EM algorithms for likelihood inferences
García-Cortés, Louis Alberto; Sorensen, Daniel
2001-01-01
Two methods of computing Monte Carlo estimators of variance components using restricted maximum likelihood via the expectation-maximisation algorithm are reviewed. A third approach is suggested and the performance of the methods is compared using simulated data. PMID:11559486
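A toy sketch of the Monte Carlo EM idea for a one-way random-effects model (normal full conditionals assumed; this is an illustration of the general technique, not either of the reviewed implementations):

```python
import random, statistics

def mcem_variance_components(y, iters=100, m=30, seed=4):
    """Toy Monte Carlo EM for a one-way random-effects model
    y[i][j] = mu + u_i + e_ij, u_i ~ N(0, s2u), e_ij ~ N(0, s2e).
    E-step: sample each u_i from its normal full conditional;
    M-step: update mu and the two variance components from the draws."""
    rng = random.Random(seed)
    q, n = len(y), len(y[0])
    ybar = [statistics.mean(row) for row in y]
    mu = statistics.mean(ybar)
    s2u, s2e = 1.0, 1.0
    for _ in range(iters):
        su_acc = se_acc = mu_acc = 0.0
        for _ in range(m):
            su = se = mus = 0.0
            for i in range(q):
                var = 1.0 / (n / s2e + 1.0 / s2u)      # full-conditional variance
                mean = var * n * (ybar[i] - mu) / s2e  # full-conditional mean
                u = rng.gauss(mean, var ** 0.5)
                su += u * u
                se += sum((yij - mu - u) ** 2 for yij in y[i])
                mus += ybar[i] - u
            su_acc += su / q
            se_acc += se / (q * n)
            mu_acc += mus / q
        s2u, s2e, mu = su_acc / m, se_acc / m, mu_acc / m
    return mu, s2u, s2e

# Simulated data with known truth: mu = 0, s2u = 2, s2e = 1.
rng = random.Random(0)
q, n = 40, 8
u_true = [rng.gauss(0.0, 2.0 ** 0.5) for _ in range(q)]
y = [[u_true[i] + rng.gauss(0.0, 1.0) for _ in range(n)] for i in range(q)]
mu, s2u, s2e = mcem_variance_components(y)
```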
Dickens, J.K.
1988-04-01
This document provides a discussion of the development of the FORTRAN Monte Carlo program SCINFUL (for scintillator full response), a program designed to provide the calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The program may also be used to compute angle-integrated spectra of charged particles (p, d, t, 3He, and alpha) following neutron interactions with 12C. Extensive comparisons with a variety of experimental data are given. There is generally good overall agreement (<10% differences) between results from SCINFUL calculations and measured detector responses, i.e., N(E_r) vs E_r, where E_r is the response pulse height; calculated spectra reproduce measured detector responses with an accuracy which, at least partly, depends upon how well the experimental configuration is known. For E_n < 16 MeV and for E_r > 15% of the maximum pulse-height response, calculated spectra are within ±5% of experiment on average. For E_n up to 50 MeV, similarly good agreement with experiment is obtained for E_r > 30% of maximum response. For E_n up to 75 MeV, the calculated shape of the response agrees with measurements, but the calculations underpredict the measured response by up to 30%. 65 refs., 64 figs., 3 tabs.
Yoo, Do Hyeon; Shin, Wook-Geun; Lee, Jaekook; Yeom, Yeon Soo; Kim, Chan Hyeong; Chang, Byung-Uck; Min, Chul Hee
2017-11-01
After the Fukushima accident in Japan, the Korean Government implemented the "Act on Protective Action Guidelines Against Radiation in the Natural Environment" to regulate unnecessary radiation exposure to the public. However, despite the law, which came into effect in July 2012, an appropriate method to evaluate the equivalent and effective doses from naturally occurring radioactive material (NORM) in consumer products is not available. The aim of the present study is to develop and validate an effective dose coefficient database enabling the simple and correct evaluation of the effective dose due to the usage of NORM-added consumer products. To construct the database, we used a skin source method with a computational human phantom and Monte Carlo (MC) simulation. For the validation, the effective dose was compared between the database, using an interpolation method, and the original MC method. Our results showed similar equivalent doses across the 26 organs, with differences in the corresponding average doses between the database and the MC calculations of <5%. The differences in the effective doses were even smaller, and the results generally show that equivalent and effective doses can be quickly calculated with the database with sufficient accuracy.
Parallel Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ren, Ruichao; Orkoulas, G.
2007-06-01
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
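Sequential sublattice (checkerboard) updating, the key identified above, can be sketched for a 2D Ising-type lattice model: sites of one color do not interact with one another, so each color can be updated simultaneously or split across processor domains (toy parameters; not the authors' code):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One sweep of sequential sublattice updating for a 2D Ising-type
    lattice: 'black' and 'white' sites are updated in two stages, and
    all sites of one color may be updated at once because they do not
    interact with each other."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                        # Metropolis energy change
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        spins[mask & accept] *= -1
    return spins

rng = np.random.default_rng(5)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):
    checkerboard_sweep(spins, beta=0.2, rng=rng)     # disordered phase
```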
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
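The fine-grained discretization contrasted above with node or midpoint approximations can be sketched for a single branch: sample the Brownian log-rate at k points and average the implied rate along the trajectory (illustrative parameters only):

```python
import math, random

def brownian_branch_rates(log_r0, sigma, t, k, rng):
    """Fine-grained discretization of a Brownian log-rate along one branch:
    k increments of variance sigma^2 * t / k, instead of summarizing the
    branch by its endpoint or midpoint values alone."""
    dt = t / k
    path = [log_r0]
    for _ in range(k):
        path.append(path[-1] + rng.gauss(0.0, sigma * math.sqrt(dt)))
    avg_rate = sum(math.exp(v) for v in path) / len(path)  # branchwise average
    midpoint_rate = math.exp(path[k // 2])                 # crude midpoint summary
    return path, avg_rate, midpoint_rate

rng = random.Random(6)
path, avg, mid = brownian_branch_rates(0.0, 0.5, 1.0, 100, rng)
```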
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
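The linearization in the abstract rests on a Gaussian integral identity. A standard single-variable form of the Hubbard-Stratonovich transformation is shown below, with \(\Delta\beta\) the imaginary-time slice and \(\lambda\), \(\hat{A}\) the eigenvalue and operator of one quadratic term (this notation is an assumption for illustration, not taken from the abstract):

```latex
e^{\tfrac{1}{2}\,\Delta\beta\,\lambda\,\hat{A}^{2}}
  = \sqrt{\frac{\Delta\beta\,\lambda}{2\pi}}
    \int_{-\infty}^{\infty} d\sigma\,
    e^{-\tfrac{1}{2}\,\Delta\beta\,\lambda\,\sigma^{2}
       \;+\;\Delta\beta\,\lambda\,\sigma\,\hat{A}}
```

The identity holds for \(\lambda > 0\) (for negative eigenvalues a rotated, imaginary field is used). The two-body exponent is thus traded for a one-body exponent \(\sigma\hat{A}\), and the integral over the auxiliary fields \(\sigma\) is what the Monte Carlo sampling performs.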
Manohar, Nivedh; Cho, Sang Hyun
2014-10-15
Purpose: To develop an accurate and comprehensive Monte Carlo (MC) model of an experimental benchtop polychromatic cone-beam x-ray fluorescence computed tomography (XFCT) setup and apply this MC model to optimize incident x-ray spectrum for improving production/detection of x-ray fluorescence photons from gold nanoparticles (GNPs). Methods: A detailed MC model, based on an experimental XFCT system, was created using the Monte Carlo N-Particle (MCNP) transport code. The model was validated by comparing MC results including x-ray fluorescence (XRF) and scatter photon spectra with measured data obtained under identical conditions using 105 kVp cone-beam x-rays filtered by either 1 mm of lead (Pb) or 0.9 mm of tin (Sn). After validation, the model was used to investigate the effects of additional filtration of the incident beam with Pb and Sn. Supplementary incident x-ray spectra, representing heavier filtration (Pb: 2 and 3 mm; Sn: 1, 2, and 3 mm) were computationally generated and used with the model to obtain XRF/scatter spectra. Quasimonochromatic incident x-ray spectra (81, 85, 90, 95, and 100 keV with 10 keV full width at half maximum) were also investigated to determine the ideal energy for distinguishing gold XRF signal from the scatter background. Fluorescence signal-to-dose ratio (FSDR) and fluorescence-normalized scan time (FNST) were used as metrics to assess results. Results: Calculated XRF/scatter spectra for 1-mm Pb and 0.9-mm Sn filters matched (r ≥ 0.996) experimental measurements. Calculated spectra representing additional filtration for both filter materials showed that the spectral hardening improved the FSDR at the expense of requiring a much longer FNST. In general, using Sn instead of Pb, at a given filter thickness, allowed an increase of up to 20% in FSDR, more prominent gold XRF peaks, and up to an order of magnitude decrease in FNST. Simulations using quasimonochromatic spectra suggested that increasing source x-ray energy, in the
Challenges of Monte Carlo Transport
Long, Alex Roberts
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral-particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g., OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
Dynamically stratified Monte Carlo forecasting
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Suarez, Max; Schemm, Jae-Kyung; Epstein, Edward
1992-01-01
A new method for performing Monte Carlo forecasts is introduced. The method, called dynamic stratification, selects initial perturbations based on a stratification of the error distribution. A simple implementation is presented in which the error distribution used for the stratification is estimated from a linear model derived from a large ensemble of 12-h forecasts with the full dynamic model. The stratification thus obtained is used to choose a small subsample of initial states with which to perform the dynamical Monte Carlo forecasts. Several test cases are studied using a simple two-level general circulation model with uncertain initial conditions. It is found that the method provides substantial reductions in the sampling error of the forecast mean and variance when compared to the more traditional approach of choosing the initial perturbations at random. The degree of improvement, however, is sensitive to the nature of the initial error distribution and to the base state. In practice the method may be viable only if the computational burden involved in obtaining an adequate estimate of the error distribution is shared with the data-assimilation procedure.
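The variance reduction from stratifying the initial perturbations can be seen in a toy setting. The sketch below (plain numpy; the "forecast error" functional is a made-up stand-in, not the paper's linear model) compares the sampling error of an ensemble mean when initial draws are taken at random versus one draw per stratum:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x   # toy "forecast error" functional
n, reps = 64, 2000

def simple_mc():
    # traditional approach: n initial states chosen at random
    return f(rng.random(n)).mean()

def stratified_mc():
    # one draw per stratum: u_i uniform on [i/n, (i+1)/n)
    u = (np.arange(n) + rng.random(n)) / n
    return f(u).mean()

simple_var = np.var([simple_mc() for _ in range(reps)])
strat_var = np.var([stratified_mc() for _ in range(reps)])
# stratification removes the between-stratum component of the sampling error
```

The same mechanism underlies dynamic stratification: partitioning the (estimated) error distribution and sampling each cell guarantees coverage that random selection only achieves on average.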
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
Monte Carlo Reliability Analysis.
1987-10-01
(3) E. E. Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life...
Reconstruction of Monte Carlo replicas from Hessian parton distributions
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.
2017-03-01
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
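The core conversion step can be sketched in a few lines. The version below uses a simple symmetric-Gaussian displacement along each Hessian eigenvector direction (the paper's procedure additionally handles asymmetric uncertainties and positivity; the function name and toy numbers here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

def hessian_to_replicas(f0, f_plus, f_minus, n_rep, rng):
    """Convert a central PDF value f0 and Hessian eigenvector pairs
    (f_plus[i], f_minus[i]) into Monte Carlo replicas.
    Symmetric-Gaussian sketch: each eigenvector direction gets an
    independent standard-normal weight R_i."""
    R = rng.standard_normal((n_rep, len(f_plus)))
    # symmetric displacement along each eigenvector direction
    delta = 0.5 * (np.asarray(f_plus) - np.asarray(f_minus))
    return f0 + R @ delta

# toy: 3 eigenvector pairs, scalar "PDF value" at one (x, Q) point
f0 = 1.00
f_plus = [1.05, 1.02, 0.99]
f_minus = [0.96, 0.97, 1.01]
reps = hessian_to_replicas(f0, f_plus, f_minus, 10000, rng)
# replica mean ~ f0; replica spread ~ quadrature sum of the half-differences
```

Averages over the replicas then reproduce the Hessian central value, and the replica standard deviation reproduces the symmetric Hessian uncertainty.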
Monte Carlo study of vibrational relaxation processes
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for the computation of vibrational nonequilibrium in the direct simulation Monte Carlo method (DSMC). This model permits level to level vibrational transitions for the first time in a Monte Carlo flowfield simulation. The model follows the Landau-Teller theory for a harmonic oscillator in which the rates of transition are related to an experimental correlation for the vibrational relaxation time. The usual method for simulating such processes in the DSMC technique applies a constant exchange probability to each collision and the vibrational energy is treated as a continuum. A comparison of these two methods is made for the flow of nitrogen over a wedge. Significant differences exist for the vibrational temperatures computed. These arise as a consequence of the incorrect application of a constant exchange probability in the old method. It is found that the numerical performances of the two vibrational relaxation models are equal.
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm^-1.
Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program
ERIC Educational Resources Information Center
Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.
2004-01-01
The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na(super +), Cl(super -), and Ar on a personal computer to show that it is easily feasible to…
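The kind of liquid simulation the abstract refers to is, at its core, Metropolis Monte Carlo over particle configurations. A self-contained classroom-scale sketch (Lennard-Jones particles in a periodic box, reduced units; all parameter values are illustrative, not those of the cited program):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, T = 20, 5.0, 2.0          # particles, box side, reduced temperature
beta, step = 1.0 / T, 0.2

def pair_energy(pos, i, L):
    """Lennard-Jones energy of particle i with all others (periodic box)."""
    d = pos - pos[i]
    d -= L * np.round(d / L)     # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf               # skip self-interaction
    inv6 = 1.0 / r2 ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

pos = rng.random((N, 3)) * L
accepted = 0
for sweep in range(200):
    for i in range(N):
        old_e = pair_energy(pos, i, L)
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-step, step, 3)) % L
        new_e = pair_energy(trial, i, L)
        # Metropolis acceptance test at temperature T
        if new_e < old_e or rng.random() < np.exp(-beta * (new_e - old_e)):
            pos, accepted = trial, accepted + 1
acc_rate = accepted / (200 * N)
```

Averaging pair distances over the accepted configurations is how such a program builds up the radial distribution function describing solvation structure.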
A Monte Carlo Simulation of Brownian Motion in the Freshman Laboratory
ERIC Educational Resources Information Center
Anger, C. D.; Prescott, J. R.
1970-01-01
Describes a "dry-lab" experiment for the college freshman laboratory in which the essential features of Brownian motion are demonstrated from first principles using the Monte Carlo technique. Calculations are carried out by a computation scheme based on a computer language. Bibliography. (LC)
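A freshman-level Monte Carlo model of Brownian motion needs nothing beyond a random walk. The sketch below (standard library only; the unit-kick model is the usual classroom simplification, not taken from the cited article) verifies Einstein's signature result that the mean-square displacement grows linearly with the number of collisions:

```python
import random
import math

random.seed(42)

def brownian_walk(n_steps):
    """One 2-D Brownian trajectory: each 'collision' kicks the particle
    a unit distance in a random direction."""
    x = y = 0.0
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return x * x + y * y        # squared displacement

# Einstein's result: <r^2> grows linearly with the number of steps
msd_100 = sum(brownian_walk(100) for _ in range(2000)) / 2000
msd_400 = sum(brownian_walk(400) for _ in range(2000)) / 2000
```

Quadrupling the number of steps roughly quadruples the mean-square displacement, while any single trajectory wanders unpredictably — which is the pedagogical point.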
Díez, A; Largo, J; Solana, J R
2006-08-21
Computer simulations have been performed for fluids with van der Waals potential, that is, hard spheres with attractive inverse power tails, to determine the equation of state and the excess energy. On the other hand, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined too from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, it has been found that results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
Eisenhaber, F; Tumanyan, V G; Eisenmenger, F; Gunia, W
1989-03-01
A computational method is elaborated for studying the water environment around regular polynucleotide duplexes; it allows rigorous structural information on the hydration shell of DNA to be obtained. The crucial aspect of this Monte Carlo simulation is the use of periodic boundary conditions. The output data consist of local maxima of water density in the space near the DNA molecule and the properties of one- and two-membered water bridges as a function of pairs of polar groups of DNA. In the present paper the results for poly(dG).poly(dC) and poly(dG-dC).poly(dG-dC) are presented. The differences in their hydration shells are of a purely structural nature and are caused by the symmetry of the polar groups of the polymers under study, the symmetry being reflected by the hydration shell. The homopolymer duplex hydration shell mirrors the mononucleotide repeat. The water molecules contacting the polynucleotide in the minor groove are located nearly in the plane midway between the planes of successive base pairs. One water molecule per base pair forms a water bridge facing two polar groups of bases from adjacent base pairs and on different strands, making a "spine"-like structure. In contrast, the major groove hydration is stabilized exclusively by two-membered water bridges; the water molecules deepest in the groove are concentrated near the plane of the corresponding base pair. The alternating polymer is characterized by a marked dyad symmetry of the hydration shell corresponding to the axis between two successive base pairs. The minor groove hydration of the dCpdG step resembles the characteristic features of the homopolymer, but the bridge between the O2 oxygens of the other base-stacking type is formed by two water molecules. The major groove hydration is characterized by a high probability of one-membered water bridges and by localization of a water molecule on the dyad axis of the dGpdC step. The structural elements found are discussed as reasonable invariants of
Monte Carlo radiation transport: A revolution in science
Hendricks, J.
1993-04-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
ERIC Educational Resources Information Center
Borcherds, P. H.
1986-01-01
Describes an optional course in "computational physics" offered at the University of Birmingham. Includes an introduction to numerical methods and presents exercises involving fast-Fourier transforms, non-linear least-squares, Monte Carlo methods, and the three-body problem. Recommends adding laboratory work into the course in the…
Quantum Monte Carlo for vibrating molecules
Brown, W.R.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schrödinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
Monte Carlo-Minimization and Monte Carlo Recursion Approaches to Structure and Free Energy.
NASA Astrophysics Data System (ADS)
Li, Zhenqin
1990-08-01
Biological systems are intrinsically "complex", involving many degrees of freedom, heterogeneity, and strong interactions among components. For the simplest of biological substances, e.g., biomolecules, which obey the laws of thermodynamics, we may attempt a statistical mechanical investigational approach. Even for these simplest many -body systems, assuming microscopic interactions are completely known, current computational methods in characterizing the overall structure and free energy face the fundamental challenge of an exponential amount of computation, with the rise in the number of degrees of freedom. As an attempt to surmount such problems, two computational procedures, the Monte Carlo-minimization and Monte Carlo recursion methods, have been developed as general approaches to the determination of structure and free energy of a complex thermodynamic system. We describe, in Chapter 2, the Monte Carlo-minimization method, which attempts to simulate natural protein folding processes and to overcome the multiple-minima problem. The Monte Carlo-minimization procedure has been applied to a pentapeptide, Met-enkephalin, leading consistently to the lowest energy structure, which is most likely to be the global minimum structure for Met-enkephalin in the absence of water, given the ECEPP energy parameters. In Chapter 3 of this thesis, we develop a Monte Carlo recursion method to compute the free energy of a given physical system with known interactions, which has been applied to a 32-particle Lennard-Jones fluid. In Chapter 4, we describe an efficient implementation of the recursion procedure, for the computation of the free energy of liquid water, with both MCY and TIP4P potential parameters for water. As a further demonstration of the power of the recursion method for calculating free energy, a general formalism of cluster formation from monatomic vapor is developed in Chapter 5. The Gibbs free energy of constrained clusters can be computed efficiently using the
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
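The adaptive Metropolis-Hastings idea that engines like LMC implement can be illustrated compactly: tune the random-walk proposal scale toward a target acceptance rate, with an adaptation that decays so the chain remains ergodic. This is a generic sketch of the technique, not LMC's actual implementation; the target log-posterior and all parameter values are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(x):
    return -0.5 * float(np.sum(x * x))   # stand-in for an expensive likelihood

def adaptive_metropolis(log_post, x0, n, target=0.35):
    """Random-walk Metropolis whose proposal scale adapts toward a target
    acceptance rate with a diminishing adaptation schedule."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    scale = 1.0
    chain = np.empty((n, x.size))
    for i in range(n):
        prop = x + scale * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
        # diminishing (Robbins-Monro style) adaptation keeps the chain valid
        scale *= np.exp(((1.0 if accept else 0.0) - target) / (i + 1) ** 0.6)
        chain[i] = x
    return chain, scale

chain, scale = adaptive_metropolis(log_post, [3.0, -3.0], 20000)
samples = chain[5000:]          # discard burn-in
```

When the likelihood evaluation is the bottleneck, as in LMC's use case, each `log_post(prop)` call is what gets farmed out to the third-party code across nodes.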
Markov Chain Monte Carlo from Lagrangian Dynamics
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2014-01-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515
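The baseline that RHMC and the Lagrangian variant build on is the explicit leapfrog integrator of plain HMC. A minimal one-dimensional sketch on a standard-normal target (Euclidean metric only — it is precisely this explicit step that the implicit RHMC integrator generalizes; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def grad_U(q):
    return q                     # U(q) = q^2 / 2, i.e. a standard normal target

def hmc_step(q, eps=0.2, n_leap=20):
    """One HMC update with the explicit leapfrog integrator."""
    p = rng.standard_normal()                 # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)        # initial half step in momentum
    for k in range(n_leap):
        q_new += eps * p_new                  # full step in position
        if k < n_leap - 1:
            p_new -= eps * grad_U(q_new)      # full step in momentum
    p_new -= 0.5 * eps * grad_U(q_new)        # final half step in momentum
    h_old = 0.5 * (q * q + p * p)
    h_new = 0.5 * (q_new * q_new + p_new * p_new)
    # Metropolis correction removes the integrator's discretization error
    return q_new if np.log(rng.random()) < h_old - h_new else q

q, chain = 3.0, []
for _ in range(5000):
    q = hmc_step(q)
    chain.append(q)
samples = np.array(chain[500:])
```

Because every update here is explicit, no fixed-point iteration is needed — the cost the paper's velocity-based transformation seeks to avoid in the Riemannian setting.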
Monte Carlo simulation in statistical physics: an introduction
NASA Astrophysics Data System (ADS)
Binder, K.; Heermann, D. W.
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
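The canonical newcomer exercise in this area is Metropolis sampling of the 2-D Ising model. A compact sketch (periodic boundaries, single-spin flips; lattice size and temperature are illustrative, chosen well above the critical temperature so the magnetization should be small):

```python
import numpy as np

rng = np.random.default_rng(6)
L, T = 16, 5.0                    # lattice side, temperature (Tc ~ 2.27)
beta = 1.0 / T
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    """One Metropolis sweep of the 2-D Ising model, periodic boundaries."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(300):
    sweep(spins)
magnetization = abs(spins.mean())  # small in the hot, disordered phase
```

Averaging observables such as the magnetization over many sweeps is exactly the "estimation of thermodynamic properties from computed probability distributions" the abstract describes.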
Parallel CARLOS-3D code development
Putnam, J.M.; Kotulski, J.D.
1996-02-01
CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.
Quantum Monte Carlo for atoms and molecules
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schrödinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
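The k-eigenvalue computation mentioned above can be illustrated with a drastically simplified analog: a one-group, infinite homogeneous medium, where k∞ = νΣf/Σa can be checked analytically. This toy is only a sketch of the power-iteration tallying idea, not MC++'s multi-group mesh transport; all cross-section values are made up:

```python
import random

random.seed(7)

# one-group infinite-medium cross sections (toy numbers, not real data)
sig_s, sig_a, sig_f, nu = 0.5, 0.5, 0.2, 2.5   # sig_f is part of sig_a
sig_t = sig_s + sig_a
k_analytic = nu * sig_f / sig_a                 # = 1.0 for these numbers

def k_cycle(n_source):
    """One power-iteration cycle: follow n_source histories to absorption
    and tally the fission neutrons produced per source neutron."""
    fission_neutrons = 0.0
    for _ in range(n_source):
        while True:
            if random.random() < sig_a / sig_t:       # collision absorbs
                if random.random() < sig_f / sig_a:   # ...by fission
                    fission_neutrons += nu
                break
            # otherwise the neutron scattered and the history continues
    return fission_neutrons / n_source

k_estimates = [k_cycle(2000) for _ in range(50)]
k_eff = sum(k_estimates) / len(k_estimates)
```

Because each history is independent, cycles like these parallelize trivially — the property that lets codes like MC++ turn day-long eigenvalue runs into seconds or minutes.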
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
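The gap between analog Monte Carlo and variance-reduced Monte Carlo that the abstract emphasizes is easy to demonstrate on a standard (Boltzmann-type) tally; the non-Boltzmann case inherits the same need. The sketch below estimates a rare-event probability by analog sampling and by importance sampling with an unbiased weight (the shifted proposal is a textbook choice, not one of the paper's three approaches):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(8)

# Estimate the rare-event probability P(X > 4) for X ~ N(0, 1)
p_true = 0.5 * (1.0 - erf(4.0 / sqrt(2.0)))     # about 3.17e-5

n = 100_000

# Analog Monte Carlo: almost no samples land in the tail
analog = float(np.mean(rng.standard_normal(n) > 4.0))

# Importance sampling: draw from N(4, 1) and weight by the density ratio
y = rng.standard_normal(n) + 4.0
w = np.exp(8.0 - 4.0 * y)       # N(0,1) pdf / N(4,1) pdf (normalizations cancel)
is_est = float(np.mean(w * (y > 4.0)))
```

The weighted estimator is provably unbiased yet has orders of magnitude less variance here — the same unbiasedness property the paper establishes for its non-Boltzmann variance-reduction schemes.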
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.
Kalos, M. H.; Pederiva, F.
1998-12-01
We review the fundamental challenge of fermion Monte Carlo for continuous systems, the "sign problem". We seek that eigenfunction of the many-body Schrödinger equation that is antisymmetric under interchange of the coordinates of pairs of particles. We describe methods that depend upon the use of correlated dynamics for pairs of correlated walkers that carry opposite signs. There is an algorithmic symmetry between such walkers that must be broken to create a method that is both exact and as effective as for symmetric functions. In our new method, it is broken by using different "guiding" functions for walkers of opposite signs, and a geometric correlation between steps of their walks. With a specific process of cancellation of the walkers, overlaps with antisymmetric test functions are preserved. Finally, we describe the progress in treating free-fermion systems and a fermion fluid with 14 ^{3}He atoms.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Monte Carlo simulations of medical imaging modalities
Estes, G.P.
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Drag coefficient modeling for GRACE using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.
2013-12-01
The drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods like Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools in accurately computing physical drag coefficients. However, the methods are computationally expensive and cannot be employed in real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique for developing parameterized drag coefficient models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{–2}) or O(ε^{–2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{–3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10^{–5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
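The cost savings come from the telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], estimating each correction with coarse and fine paths that share the same Brownian increments so their difference has small variance. A minimal sketch (not the authors' code; the Ornstein–Uhlenbeck process, function names, and fixed per-level sample counts are all illustrative simplifications) using Euler–Maruyama:

```python
import numpy as np

def mlmc_estimate(num_levels=4, samples_per_level=20000, T=1.0, sigma=0.5, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for the toy OU process
    dX = -X dt + sigma dW, X_0 = 1, using Euler-Maruyama.
    Coarse and fine paths on each level share Brownian increments,
    so the level corrections P_l - P_{l-1} have small variance."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level in range(num_levels):
        n_fine = 2 ** (level + 1)              # fine timesteps on this level
        dt = T / n_fine
        x_fine = np.ones(samples_per_level)
        x_coarse = np.ones(samples_per_level)
        for step in range(n_fine):
            dw = rng.normal(0.0, np.sqrt(dt), samples_per_level)
            x_fine += -x_fine * dt + sigma * dw
            if level > 0:
                if step % 2 == 0:
                    dw_pair = dw               # stash the first of a pair
                else:
                    # coarse path: one step per two fine steps, summed increments
                    x_coarse += -x_coarse * (2 * dt) + sigma * (dw_pair + dw)
        if level == 0:
            estimate += x_fine.mean()                  # E[P_0]
        else:
            estimate += (x_fine - x_coarse).mean()     # E[P_l - P_{l-1}]
    return estimate
```

In a production implementation the deeper (more expensive) levels would use far fewer samples, which is where the O(ε^{–2}) cost comes from; the fixed count here only keeps the sketch short.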
Marcus, Ryan C.
2012-07-24
This presentation covers: (1) exascale computing - different technologies and how to get there; (2) the high-performance proof of concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
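The core move of such a likelihood-free MCMC is to replace the likelihood evaluation with a simulate-and-compare step: propose a parameter, simulate data under it, and accept only if a summary statistic falls close to the observed one. A minimal sketch under assumed simplifications (flat prior, symmetric proposal, a toy Gaussian-mean model; all names and tolerances are illustrative, not the paper's population-genetics application):

```python
import random

def abc_mcmc(observed_mean, n_obs=50, sigma=1.0, eps=0.05,
             n_iter=20000, step=0.5, seed=1):
    """Likelihood-free MCMC: instead of evaluating a likelihood, simulate
    a dataset at the proposed parameter and accept the move only if its
    summary statistic (here, the sample mean) is within eps of the
    observed summary. Flat prior and symmetric proposal are assumed,
    so the prior and proposal ratios drop out of the acceptance rule."""
    rng = random.Random(seed)
    theta = observed_mean  # start at a reasonable value
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.uniform(-step, step)
        # simulate a dataset under the proposal and summarize it
        sim_mean = sum(rng.gauss(proposal, sigma) for _ in range(n_obs)) / n_obs
        if abs(sim_mean - observed_mean) <= eps:
            theta = proposal
        samples.append(theta)
    return samples
```

The chain targets an approximation to the posterior that becomes exact as eps shrinks, at the price of a falling acceptance rate.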
Quantum Monte Carlo calculations for light nuclei
Wiringa, R.B.
1997-10-01
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 have been made using a realistic Hamiltonian that fits NN scattering data. Results for more than two dozen different (J{sup {pi}}, T) p-shell states, not counting isobaric analogs, have been obtained. The known excitation spectra of all the nuclei are reproduced reasonably well. Density and momentum distributions and various electromagnetic moments and form factors have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Quantum Monte Carlo calculations for light nuclei.
Wiringa, R. B.
1998-10-23
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (J{pi}, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Quantum Monte Carlo calculations for light nuclei
Wiringa, R.B.
1998-08-01
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (J{sup {pi}}, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of {sup 4}He in two dimensions.
Quantum Monte Carlo for Molecules.
1984-11-01
[OCR-garbled record of a scanned DTIC report, AD-A148 159: Quantum Monte Carlo for Molecules; William A. Lester, Jr. and Peter J. Reynolds, Lawrence Berkeley Laboratory, University of California, Berkeley. Only the title, authors, affiliation, and the keywords "Quantum Monte Carlo" and "importance ..." are recoverable from the scan.]
Quantum Monte Carlo for Molecules.
1986-12-01
[OCR-garbled record of a scanned DTIC report: Quantum Monte Carlo for Molecules; W. A. Lester et al., Lawrence Berkeley Laboratory, University of California, Berkeley. Only the title, author, affiliation, and the keywords "Quantum Monte Carlo" and "importance functions" are recoverable from the scan.]
Monte Carlo Simulation Of Emission Tomography And Other Medical Imaging Techniques.
Harrison, Robert L
2010-01-05
An introduction to Monte Carlo simulation of emission tomography. This paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.
Wormhole Hamiltonian Monte Carlo.
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2014-07-31
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
Perturbation Monte Carlo methods for tissue structure alterations.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome
2013-01-01
This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
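The reuse of a single baseline simulation rests on likelihood-ratio reweighting of each sampled history. The following toy sketch (not the paper's tissue model or phase-function extension; the scattering-probability model and all parameters are illustrative) perturbs a scattering probability and recovers the perturbed mean collision count from baseline histories alone:

```python
import random

def perturbed_collision_count(c_base=0.5, c_pert=0.6, n=200000, seed=6):
    """Perturbation Monte Carlo in miniature: run one baseline simulation
    and reweight each history by the likelihood ratio of the perturbed
    problem. Toy model: a particle scatters with probability c and is
    absorbed with probability 1 - c; a history with k collisions has
    probability c^(k-1) * (1 - c). We estimate the mean collision count
    under c_pert using only baseline (c_base) histories."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        k = 1
        while rng.random() < c_base:   # scatter again in the baseline problem
            k += 1
        # likelihood ratio for k-1 scatters followed by one absorption
        w = (c_pert / c_base) ** (k - 1) * (1 - c_pert) / (1 - c_base)
        total += k * w
    return total / n   # compare with the exact value 1 / (1 - c_pert)
```

The same reweighting logic, with the phase function entering the per-collision ratio, is what the paper's extension generalizes.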
On a full Monte Carlo approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem in which complexity is known to grow exponentially with the dimensions of the problem). The introduction of this stochastic technique for the kernel is twofold: firstly it reduces the complexity of a quantum many-body simulation from non-linear to linear, secondly it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
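In its generic form, the importance-sampling strategy the authors apply to the Wigner kernel reduces to drawing from a convenient density q and reweighting each draw by p/q. A self-contained one-dimensional illustration (not the signed-particle code; the densities and test function are chosen only for clarity):

```python
import random, math

def importance_sample(f, n=100000, seed=2):
    """Generic importance-sampling estimator of E_p[f]: draw from a
    convenient proposal density q and reweight by p/q. Here p is the
    standard normal and q a wider normal (mean 0, sd 2); both densities
    are evaluated explicitly at each sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 2.0)                                   # sample from q
        p = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)         # target density
        q = math.exp(-x * x / 8) / (2 * math.sqrt(2 * math.pi))   # proposal density
        total += f(x) * p / q                                     # reweighted term
    return total / n

# E[x^2] under the standard normal is exactly 1
estimate = importance_sample(lambda x: x * x)
```

Choosing q close to |f|·p is what controls the variance; for the Wigner kernel this choice is the crux of the technique.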
Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.
Miller, G; Martz, H F; Little, T T; Guilmette, R
2002-01-01
A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time.
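A Metropolis sampler of the kind described needs only a log-posterior that can be evaluated up to a constant. A minimal sketch on a toy intake-estimation problem (a single intake amount with Gaussian bioassay noise and a flat prior on nonnegative intakes; this is not the paper's multi-intake biokinetic model):

```python
import random, math

def metropolis_posterior(measurements, sigma=1.0, n_iter=30000, step=0.8, seed=3):
    """Minimal Metropolis sampler: infer a single intake amount A >= 0
    from bioassay measurements modeled as A plus Gaussian noise, with
    a flat prior on A >= 0. The chain needs only the unnormalized
    log-posterior, never the normalizing constant."""
    rng = random.Random(seed)

    def log_post(a):
        if a < 0:
            return -math.inf  # prior excludes negative intakes
        return -sum((m - a) ** 2 for m in measurements) / (2 * sigma ** 2)

    a = max(sum(measurements) / len(measurements), 0.0)
    samples = []
    for _ in range(n_iter):
        prop = a + rng.uniform(-step, step)
        # Metropolis rule: always accept uphill moves, sometimes downhill
        if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(a):
            a = prop
        samples.append(a)
    return samples
```

Posterior summaries (means, credible intervals) are then simple averages over the chain after discarding burn-in, which is the "integrating over the Bayesian posterior" step the abstract refers to.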
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth with either a distributed memory parallel computer or a shared memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase appreciably, and speedup shows an ascending tendency as the number of processors increases. A near linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.
An Overview of the Monte Carlo Methods, Codes, & Applications Group
Trahan, Travis John
2016-08-30
This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.
Monte Carlo Estimation of the Electric Field in Stellarators
NASA Astrophysics Data System (ADS)
Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.
1986-10-01
The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable.
The Use of Monte Carlo Techniques to Teach Probability.
ERIC Educational Resources Information Center
Newell, G. J.; MacFarlane, J. D.
1985-01-01
Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
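A classroom exercise in the spirit of these examples (a dice problem rather than the article's cricket and football programs, which are not reproduced here) needs only a random number generator and a counter:

```python
import random

def estimate_probability(trials=200000, seed=4):
    """Classroom-style Monte Carlo experiment: estimate the probability
    of rolling at least one six in four throws of a fair die, then
    compare with the exact value 1 - (5/6)^4 ~ 0.5177."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if any(rng.randint(1, 6) == 6 for _ in range(4)):
            hits += 1
    return hits / trials
```

Students can vary the trial count to see the estimate converge toward the exact answer, which is exactly the pedagogical point the article makes with its microcomputer examples.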
Play It Again: Teaching Statistics with Monte Carlo Simulation
ERIC Educational Resources Information Center
Sigal, Matthew J.; Chalmers, R. Philip
2016-01-01
Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
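A typical MCS of the kind the article advocates, sketched here in Python rather than the R package it presents, repeatedly generates data, applies a statistical procedure, and tabulates how often the procedure behaves as advertised (all parameters below are illustrative):

```python
import random, math

def ci_coverage(n=30, reps=5000, seed=5):
    """Monte Carlo simulation study: check how often a nominal 95%
    normal-theory interval for a mean actually covers the true mean
    (10.0 here) when data are drawn from N(10, 2^2)."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(reps):
        data = [rng.gauss(10.0, 2.0) for _ in range(n)]
        mean = sum(data) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
        half = 1.96 * sd / math.sqrt(n)            # z-based 95% half-width
        if mean - half <= 10.0 <= mean + half:
            covered += 1
    return covered / reps
```

Because the z critical value is used instead of the t value at n = 30, the simulated coverage falls slightly below the nominal 95% — a finding of exactly the sort an MCS is designed to reveal.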
Applications of the Monte Carlo radiation transport toolkit at LLNL
NASA Astrophysics Data System (ADS)
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Quantum Monte Carlo simulation of topological phase transitions
NASA Astrophysics Data System (ADS)
Yamamoto, Arata; Kimura, Taro
2016-12-01
We study the electron-electron interaction effects on topological phase transitions by the ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows the electron-electron interaction modifies or extinguishes topological phase transitions.
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
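An isotropic Potts-style grain-growth step of the kind IMCGG performs can be sketched as follows (an illustrative zero-temperature sketch on an axial-coordinate hexagonal grid with periodic boundaries, not the IMCGG code itself; grid size and sweep count are arbitrary):

```python
import random

def grain_growth(size=32, sweeps=200, seed=7):
    """Isotropic Monte Carlo (Potts-model) grain growth: sites on a 2D
    hexagonal grid with periodic boundaries carry grain IDs, and a site
    may flip to a random neighbor's ID when that does not increase the
    boundary energy (the count of unlike neighbors). Axial coordinates
    give each site six neighbors on a square array."""
    rng = random.Random(seed)
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    grid = [[rng.randrange(size * size) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps * size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        ni, nj = rng.choice(neighbors)
        new_id = grid[(i + ni) % size][(j + nj) % size]

        def energy(gid):
            # boundary energy of site (i, j) if it carried grain ID gid
            return sum(gid != grid[(i + a) % size][(j + b) % size]
                       for a, b in neighbors)

        if energy(new_id) <= energy(grid[i][j]):   # zero-temperature acceptance
            grid[i][j] = new_id
    return grid
```

Starting from a random assignment, repeated sweeps coarsen the microstructure: the number of distinct grain IDs falls as boundaries shrink, which is the normal grain growth the record describes.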
Conversation with Juan Carlos Negrete.
Negrete, Juan Carlos
2013-08-01
Juan Carlos Negrete is Emeritus Professor of Psychiatry, McGill University; Founding Director, Addictions Unit, Montreal General Hospital; former President, Canadian Society of Addiction Medicine; and former WHO/PAHO Consultant on Alcoholism, Drug Addiction and Mental Health.
Innovation Lecture Series - Carlos Dominguez
Carlos Dominguez is a Senior Vice President at Cisco Systems and a technology evangelist, speaking to and motivating audiences worldwide about how technology is changing how we communicate, collabo...
Monte Carlo docking with ubiquitin.
Cummings, M. D.; Hart, T. N.; Read, R. J.
1995-01-01
The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as
Carlos Chagas: biographical sketch.
Moncayo, Alvaro
2010-01-01
Carlos Chagas was born on 9 July 1878 on the farm "Bom Retiro", located close to the city of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late nineteenth century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstration of the causal link between microbes and disease. During the same years in Germany there appeared the pathological concept of disease, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a research assistant at the Institute of Manguinhos, studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world
NASA Astrophysics Data System (ADS)
Thijssen, Jos
2013-10-01
1. Introduction; 2. Quantum scattering with a spherically symmetric potential; 3. The variational method for the Schrödinger equation; 4. The Hartree-Fock method; 5. Density functional theory; 6. Solving the Schrödinger equation in periodic solids; 7. Classical equilibrium statistical mechanics; 8. Molecular dynamics simulations; 9. Quantum molecular dynamics; 10. The Monte Carlo method; 11. Transfer matrix and diagonalisation of spin chains; 12. Quantum Monte Carlo methods; 13. The finite element method for partial differential equations; 14. The lattice Boltzmann method for fluid dynamics; 15. Computational methods for lattice field theories; 16. High performance computing and parallelism; Appendix A. Numerical methods; Appendix B. Random number generators; References; Index.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
Monte Carlo verification of IMRT treatment plans on grid.
Gómez, Andrés; Fernández Sánchez, Carlos; Mouriño Gallego, José Carlos; López Cacheiro, Javier; González Castaño, Francisco J; Rodríguez-Silva, Daniel; Domínguez Carrera, Lorena; González Martínez, David; Pena García, Javier; Gómez Rodríguez, Faustino; González Castaño, Diego; Pombar Cameán, Miguel
2007-01-01
The eIMRT project is producing new remote computational tools to help radiotherapists plan and deliver treatments. The first available tool will be IMRT treatment verification using Monte Carlo, a computationally expensive problem that can be executed remotely on a GRID. In this paper, the current implementation of this process using GRID and SOA technologies is presented, describing the remote execution environment and the client.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Hybrid algorithms in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.
2012-12-01
With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of a SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
A review of best practices for Monte Carlo criticality calculations
Brown, Forrest B
2009-01-01
Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain three concerns that must be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
Parallel Monte Carlo Simulation for control system design
NASA Technical Reports Server (NTRS)
Schubert, Wolfgang M.
1995-01-01
The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
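As a rough illustration of the Monte Carlo step in stochastic robustness synthesis, the sketch below estimates a probability of instability for a toy scalar system with an uncertain pole. The model, distribution, and sample count are invented for illustration; the actual SRS cost function uses full closed-loop models:

```python
import random

def probability_of_instability(mu=0.9, sigma=0.1, samples=200_000, seed=1):
    """Monte Carlo estimate of P(|a| >= 1) for a scalar discrete-time
    system x[k+1] = a*x[k] whose pole is uncertain, a ~ Normal(mu, sigma).
    The system is unstable whenever the sampled pole leaves the unit disc."""
    rng = random.Random(seed)
    unstable = sum(abs(rng.gauss(mu, sigma)) >= 1.0 for _ in range(samples))
    return unstable / samples

print(probability_of_instability())  # roughly 0.159 (the Gaussian tail beyond 1)
```

In SRS this probability would be one term of the cost function that the genetic algorithm minimizes over controller parameters.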
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
NASA Astrophysics Data System (ADS)
Naraghi, M. H. N.; Chung, B. T. F.
1982-06-01
A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
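A minimal single-step version of the random-walk idea (not the paper's multiple-step accelerated method) can be sketched as follows; the grid size, wall temperatures, and walker count are illustrative choices:

```python
import random

def walk_on_grid(n=10, walkers=20_000, seed=2):
    """Single-step random-walk Monte Carlo for the Laplace equation on an
    n x n grid: the left wall is held at T=1, the other three walls at T=0.
    Each walker steps randomly from the centre until it hits a wall; the
    temperature estimate is the mean boundary value at absorption."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walkers):
        x, y = n // 2, n // 2
        while 0 < x < n and 0 < y < n:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        if x == 0:  # absorbed on the hot (left) wall
            hits += 1
    return hits / walkers

print(walk_on_grid())  # by symmetry the exact centre value is 0.25
```

The paper's multiple-step technique precomputes the probability of landing several steps away, letting each walker cover the same ground in far fewer moves.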
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
Monte Carlo tests of the ELIPGRID-PC algorithm
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
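The flavor of such a Monte Carlo validation can be sketched for the simpler case of a circular hot spot on a square grid (ELIPGRID itself handles elliptical spots and more general grids; all parameters here are illustrative):

```python
import math
import random

def hotspot_detection_probability(radius=0.3, spacing=1.0, trials=100_000, seed=3):
    """Monte Carlo estimate of the probability that a circular hot spot,
    dropped uniformly at random on a square sampling grid, contains at
    least one grid node. By translational symmetry it suffices to drop
    the centre inside a single grid cell."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        cx, cy = rng.uniform(0, spacing), rng.uniform(0, spacing)
        # distance to the nearest grid node: fold into the nearest corner
        dx = min(cx, spacing - cx)
        dy = min(cy, spacing - cy)
        if math.hypot(dx, dy) <= radius:
            detected += 1
    return detected / trials

# For radius < spacing/2 the exact answer is pi*r^2 / d^2.
print(hotspot_detection_probability())  # close to pi*0.09, about 0.283
```

A validation of the ELIPGRID kind compares such simulated probabilities against the code's analytic predictions case by case.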
Monte Carlo simulation of laser attenuation characteristics in fog
NASA Astrophysics Data System (ADS)
Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi
2011-06-01
Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of spherical fog drops is calculated. For the transmission attenuation of the laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when visibility is low, since single-scattering calculations then have larger errors. The phenomenon of multiple scattering is captured better when the Monte Carlo method is used to calculate the attenuation ratio of the laser transmitting in fog.
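A stripped-down sketch of the photon-sampling idea, restricted to unscattered transmission only (the paper's point is precisely that multiple scattering must be added at low visibility), might look like this; the extinction coefficient and path length are invented values:

```python
import math
import random

def direct_transmission(extinction=0.5, path_length=4.0, photons=100_000, seed=4):
    """Monte Carlo estimate of the unscattered (direct) transmission of a
    beam through fog. Each photon's free path is drawn from the exponential
    law with the given extinction coefficient (1/m); the photon reaches the
    receiver only if its first interaction lies beyond the path length (m).
    Multiple scattering is deliberately omitted in this sketch."""
    rng = random.Random(seed)
    through = sum(-math.log(rng.random()) / extinction > path_length
                  for _ in range(photons))
    return through / photons

# Beer-Lambert check: transmission approaches exp(-extinction * path_length).
print(direct_transmission(), math.exp(-2.0))  # both near 0.135
```

A full fog model would continue each scattered photon with sampled Mie phase-function angles and tally whatever re-enters the receiver's field angle.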
Condensed Matter Applications of Quantum Monte Carlo at the Petascale
NASA Astrophysics Data System (ADS)
Ceperley, David
2014-03-01
The quantum Monte Carlo method has a number of advantages that make it well suited to high-performance computation. The method scales well in particle number and can treat complex systems with weak or strong correlation, including disordered systems and large thermal and zero-point effects of the nuclei. The methods are adaptable to a variety of computer architectures and have multiple parallelization strategies. Most errors are under control, so that increases in computer resources allow a systematic increase in accuracy. We will discuss a number of recent applications of quantum Monte Carlo, including dense hydrogen and transition metal systems, and suggest future directions. Support from DOE grants DE-FG52-09NA29456, SCIDAC DE-SC0008692, the Network for Ab Initio Many-Body Methods and INCITE allocation.
Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations
Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C
2011-01-01
It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
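The under-prediction caused by correlated samples can be demonstrated on synthetic data. The sketch below uses an AR(1) series as a stand-in for inter-cycle fission-source correlation (the parameters are invented) and compares a naive standard error against a batch-means estimate:

```python
import random
import statistics

def naive_and_batch_se(n=20_000, rho=0.95, batch=500, seed=5):
    """Generate an AR(1) series x[k] = rho*x[k-1] + noise, then compare the
    naive standard error of the mean (which assumes independent samples)
    with a batch-means estimate that absorbs the autocorrelation.
    This mimics how cycle-to-cycle correlation makes Monte Carlo tally
    uncertainties look far smaller than they really are."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    naive = statistics.stdev(xs) / n ** 0.5
    means = [statistics.fmean(xs[i:i + batch]) for i in range(0, n, batch)]
    batched = statistics.stdev(means) / len(means) ** 0.5
    return naive, batched

naive, batched = naive_and_batch_se()
print(naive, batched)  # the batch-means SE is several times larger
```

With rho = 0.95 the naive estimate understates the true uncertainty by roughly a factor of six, the same order of under-prediction the paper reports for ill-behaved tallies.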
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
TOPICAL REVIEW: Monte Carlo modelling of external radiotherapy photon beams
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Seuntjens, Jan
2003-11-01
An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources.
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of manybody quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G
2015-07-01
Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.
ERIC Educational Resources Information Center
Kay, Jack G.; And Others
1988-01-01
Describes two applications of the microcomputer for laboratory exercises. Explores radioactive decay using the Bateman equations on a Macintosh computer. Provides examples and screen dumps of data. Investigates polymer configurations using a Monte Carlo simulation on an IBM personal computer. (MVL)
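The decay part of the exercise can be reproduced with a short Monte Carlo that checks the two-member Bateman solution; the species, rates, and times below are invented for illustration:

```python
import math
import random

def daughter_fraction(lam_a=0.3, lam_b=0.1, t=4.0, atoms=50_000, seed=7):
    """Monte Carlo decay chain A -> B -> (stable): each parent atom decays
    at an exponentially distributed time ta, and the daughter then decays
    at ta plus its own exponential lifetime. Counting atoms that exist as
    daughters at time t reproduces the two-member Bateman solution."""
    rng = random.Random(seed)
    count = 0
    for _ in range(atoms):
        ta = -math.log(rng.random()) / lam_a       # parent decay time
        tb = ta - math.log(rng.random()) / lam_b   # daughter decay time
        if ta <= t < tb:
            count += 1
    return count / atoms

# Bateman: N_B(t)/N0 = lam_a/(lam_b - lam_a) * (exp(-lam_a*t) - exp(-lam_b*t))
exact = 0.3 / (0.1 - 0.3) * (math.exp(-1.2) - math.exp(-0.4))
print(daughter_fraction(), exact)  # both near 0.554
```

Plotting both curves over a range of t is a natural extension of the classroom exercise.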
NASA Astrophysics Data System (ADS)
Vizoso, Sergi; Rode, Bernd M.
1995-10-01
Ab initio calculations at the Hartree-Fock Self Consistent Field (HF-SCF) level have been carried out to determine the interaction hypersurface for a sodium cation in the field of a hydroxylamine molecule. The quality of the selected wave function and basis set used in sampling the interaction energy surface of the complex has been tested and compared with alternatives. The Na+-NH2OH surface is characterized by two main minima of -24.1 and -19.3 kcal mol^-1, in which the sodium cation is coordinated to oxygen and nitrogen of hydroxylamine, respectively. An analytical pair potential expression consisting of a Coulomb term and several R^-n terms was constructed to fit the 1352 calculated single energy points of the obtained energy surface. Subsequent Monte Carlo statistical thermodynamic simulations for a dilute solution of sodium chloride in hydroxylamine are also reported. The structure of the local solution environment around the cation is analyzed by means of radial and angular distribution functions, density maps, coordination number and energy distributions.
CosmoMC: Cosmological MonteCarlo
NASA Astrophysics Data System (ADS)
Lewis, Antony; Bridle, Sarah
2011-06-01
We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, the HST Key Project, the 2dF galaxy redshift survey, type Ia supernovae, and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6- and 9-parameter analyses of flat models, and an 11-parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and correcting the accuracy of results generated by an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, examine the effect of the prior, assess goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full-power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations or purely mathematically. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
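The computer analogue of the rice-sprinkling activity is the classic dart-throwing estimate of pi. The sketch below is a generic illustration of that textbook method (not code from the article): points are scattered uniformly over the unit square, and the fraction falling inside the quarter circle of radius one estimates pi/4:

```python
import random

random.seed(42)

def estimate_pi(n):
    """Throw n random points into the unit square; the fraction landing
    inside the quarter circle x**2 + y**2 <= 1 approximates pi / 4."""
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

pi_hat = estimate_pi(100_000)
```

As with the rice activity, the estimate converges slowly (the error shrinks like 1/sqrt(n)), which itself is a useful teaching point about Monte Carlo methods.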
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.
Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald
2016-12-01
Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.
Monte Carlo Particle Lists: MCPL
NASA Astrophysics Data System (ADS)
Kittelmann, T.; Klinkby, E.; Knudsen, E. B.; Willendrup, P.; Cai, X. X.; Kanaki, K.
2017-09-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
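The Riemann-sum application mentioned above is the easiest of these to demonstrate. The sketch below is a generic illustration (not the authors' original code): the average of f over n uniform random sample points in [a, b], times the interval length, is the Monte Carlo analogue of a Riemann sum and converges to the integral:

```python
import random

random.seed(7)

def mc_integral(f, a, b, n):
    # Monte Carlo analogue of a Riemann sum: average f at n uniform
    # sample points in [a, b] and multiply by the interval length.
    return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

# Estimate of the integral of x^2 on [0, 1]; exact value is 1/3
approx = mc_integral(lambda x: x * x, 0.0, 1.0, 200_000)
```

The same uniform-sampling idea underlies the maximization and conjecture-testing applications: sample many random points and keep the extreme value or tally the outcomes.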
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
Implementation of Monte Carlo Simulations for the Gamma Knife System
NASA Astrophysics Data System (ADS)
Xiong, W.; Huang, D.; Lee, L.; Feng, J.; Morris, K.; Calugaru, E.; Burman, C.; Li, J.; Ma, C.-M.
2007-06-01
Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions or the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and is also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.9)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.
NASA Astrophysics Data System (ADS)
Foster, Ian
2001-08-01
The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.
Optimization of Monte Carlo Algorithms and Ray Tracing on GPUs
NASA Astrophysics Data System (ADS)
Bergmann, Ryan M.; Vujić, Jasmina L.
2014-06-01
To take advantage of the computational power of GPUs, algorithms that work well on CPUs must be modified to conform to the GPU execution model. In this study, typical task-parallel Monte Carlo algorithms have been reformulated in a data-parallel way, and the benefits of doing so are examined. In-progress 3D ray tracing work is also touched upon as a milestone in developing a full-featured neutron transport code. Possible solutions to problems are examined.
Monte Carlo approach to nuclei and nuclear matter
Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco
2008-10-13
We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Applications of Monte Carlo simulations of gamma-ray spectra
Clark, D.D.
1995-12-31
A short, convenient computer program based on the Monte Carlo method that was developed to generate simulated gamma-ray spectra has been found to have useful applications in research and teaching. In research, we use it to predict spectra in neutron activation analysis (NAA), particularly in prompt gamma-ray NAA (PGNAA). In teaching, it is used to illustrate the dependence of detector response functions on the nature of gamma-ray interactions, the incident gamma-ray energy, and detector geometry.
Improved numerical techniques for processing Monte Carlo thermal scattering data
Schmidt, E; Rose, P
1980-01-01
As part of a Thermal Benchmark Validation Program sponsored by the Electric Power Research Institute (EPRI), the National Nuclear Data Center has been calculating thermal reactor lattices using the SAM-F Monte Carlo Computer Code. As part of this program a significant improvement has been made in the adequacy of the numerical procedures used to process the thermal differential scattering cross sections for hydrogen bound in H2O.
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Engel, Georg P.; Schaefer, Stefan
2011-01-01
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading-order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible-human dataset. The phantom geometry describes the detailed structure of each organ and can be converted into input files for Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and comprising about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large number of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, voxels on the organ surface were also included in the merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images with the voxels merged exhaustively. Each organ geometry model was constructed without ambiguity or self-intersection, and its geometry information represents the accurate appearance and precise interior structure of the organ. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its generality and high performance were verified experimentally.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burns and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Womersley, J. . Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Compressible generalized hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.
2014-05-01
One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
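The ingredients described above (auxiliary momentum, Hamiltonian dynamics integrated numerically, and a Metropolis accept/reject step to correct for integration error) are exactly those of a standard hybrid (Hamiltonian) Monte Carlo step. The sketch below is a minimal generic illustration for a one-dimensional standard normal target, not the generalized compressible framework of the paper; step size and trajectory length are illustrative:

```python
import math
import random

random.seed(3)

# Target distribution: standard normal, i.e. U(q) = q^2 / 2
def U(q):
    return 0.5 * q * q

def gradU(q):
    return q

def hmc_step(q, eps=0.2, L=20):
    p = random.gauss(0.0, 1.0)             # resample auxiliary momentum
    q_new, p_new = q, p
    # leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * eps * gradU(q_new)
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * gradU(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * gradU(q_new)
    # Metropolis accept/reject on the change in total energy H = U + p^2/2
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    if dH <= 0 or random.random() < math.exp(-dH):
        return q_new
    return q

samples = []
q = 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Because the leapfrog integrator is reversible and volume-preserving, the acceptance rate stays near 100% for small step sizes; the paper's point is that weakened reversibility alone suffices, allowing non-volume-preserving integrators as well.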
Su, Lin; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond
2014-07-15
Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER-RT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER-RT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER-RT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER-RT agree well with DOSXYZnrc. For clinical cases, results from ARCHER-RT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2 mm criteria, the passing rates for the prostate, lung, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to
Cell-veto Monte Carlo algorithm for long-range systems
NASA Astrophysics Data System (ADS)
Kapfer, Sebastian C.; Krauth, Werner
2016-09-01
We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of the classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing GPU computing performance is the proper handling of the data structure. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times relative to the single-threaded CPU implementation.
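The parallel-tempering idea behind the simulation above can be illustrated without any GPU machinery. The sketch below is a generic single-threaded CPU toy, not the paper's multi-spin-coded Edwards-Anderson implementation: replicas at several temperatures sample a double-well potential, and periodic configuration swaps let the cold replica inherit barrier crossings from the hot ones. Potential, temperature ladder, and run length are illustrative:

```python
import math
import random

random.seed(11)

def U(x):
    # double-well potential with a barrier of height 5 at x = 0
    return 5.0 * (x * x - 1.0) ** 2

betas = [0.2, 0.5, 1.0, 2.0]               # hot -> cold temperature ladder
xs = [1.0 for _ in betas]                  # every replica starts in the right well

def metropolis(x, beta):
    xp = x + random.gauss(0.0, 0.5)
    dU = U(xp) - U(x)
    if dU <= 0 or random.random() < math.exp(-beta * dU):
        return xp
    return x

cold = []
for _ in range(20000):
    xs = [metropolis(x, b) for x, b in zip(xs, betas)]
    # attempt to swap the configurations of one random adjacent pair;
    # accept with probability min(1, exp((beta_i - beta_j) * (U_i - U_j)))
    i = random.randrange(len(betas) - 1)
    delta = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
    if delta >= 0 or random.random() < math.exp(delta):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold.append(xs[-1])

# fraction of cold-replica samples in the right well; ~0.5 once mixed
frac = sum(1 for x in cold if x > 0) / len(cold)
```

The cold replica alone would almost never cross the barrier at beta = 2; with swaps it visits both wells, which is the same mechanism that makes parallel tempering effective for frustrated spin glasses.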
Algorithmic differentiation and the calculation of forces by quantum Monte Carlo.
Sorella, Sandro; Capriotti, Luca
2010-12-21
We describe an efficient algorithm to compute forces in quantum Monte Carlo using adjoint algorithmic differentiation. This allows us to apply the space-warp coordinate transformation in differential form, and to compute all 3M force components of a system with M atoms with a computational effort comparable to that of obtaining the total energy. A few examples illustrating the method for an electronic system containing several water molecules are presented. With the present technique, the calculation of finite-temperature thermodynamic properties of materials with quantum Monte Carlo will be feasible in the near future.
Martin, W.R.; Majumdar, A. (Dept. of Nuclear Engineering); Rathkopf, J.A.; Litvin, M.
1993-04-01
Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain depends essentially on the work performed on the system during the nonequilibrium trajectory and increases as this work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to the particles of a well-defined region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
Johannesson, G; Chow, F K; Glascoe, L; Glaser, R E; Hanley, W G; Kosovic, B; Krnjajic, M; Larsen, S C; Lundquist, J K; Mirin, A A; Nitao, J J; Sugiyama, G A
2005-11-16
Atmospheric releases of hazardous materials are highly effective means to impact large populations. We propose an atmospheric event reconstruction framework that couples observed data and predictive computer-intensive dispersion models via Bayesian methodology. Due to the complexity of the model framework, a sampling-based approach is taken for posterior inference that combines Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) strategies.
NASA Technical Reports Server (NTRS)
Everson, John; Nelson, H. F.
1993-01-01
A reverse Monte Carlo radiative transfer code to predict rocket plume base heating is presented. In this technique rays representing the radiation propagation are traced backwards in time from the receiving surface to the point of emission in the plume. This increases the computational efficiency relative to the forward Monte Carlo technique when calculating the radiation reaching a specific point, as only the rays that strike the receiving point are considered.
Theory and Applications of Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Deible, Michael John
With the development of peta-scale computers and exa-scale only a few years away, the quantum Monte Carlo (QMC) method, with favorable scaling and inherent parallelizability, is poised to increase its impact on the electronic structure community. The most widely used variation of QMC is the diffusion Monte Carlo (DMC) method. The accuracy of the DMC method is limited only by the trial wave function that it employs. The effect of the trial wave function is studied here by initially developing correlation-consistent Gaussian basis sets for use in DMC calculations. These basis sets give a low variance in variational Monte Carlo calculations and improved convergence in DMC. The orbital type used in the trial wave function is then investigated, and it is shown that Brueckner orbitals result in a DMC energy comparable to a DMC energy with orbitals from density functional theory and significantly lower than orbitals from Hartree-Fock theory. Three large weakly interacting systems are then studied: a water-16 isomer, a methane clathrate, and a carbon dioxide clathrate. The DMC method is seen to be in good agreement with MP2 calculations and provides reliable benchmarks. Several strongly correlated systems are then studied. An H4 model system that allows for fine tuning of the multi-configurational character of the wave function shows when the accuracy of the DMC method with a single-Slater-determinant trial function begins to deviate from multi-reference benchmarks. The weakly interacting face-to-face ethylene dimer is studied with and without a rotation around the pi bond, which is used to increase the multi-configurational nature of the wave function. This test shows that the effect of a multi-configurational wave function in weakly interacting systems causes DMC with a single Slater determinant to be unable to achieve sub-chemical accuracy. The beryllium dimer is studied, and it is shown that a very large determinant expansion is required for DMC to predict a binding
Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations
Soran, P.D.; McKeon, D.C.; Booth, T.E.; Schlumberger Well Services, Houston, TX; Los Alamos National Lab., NM
1989-07-01
Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown. 5 refs., 1 tab.
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
Monte Carlo scatter correction for SPECT
NASA Astrophysics Data System (ADS)
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model, which yields lower noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
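One of the baseline methods mentioned, the triple energy window (TEW) correction, has a simple closed form: the scatter counts under the photopeak window are approximated by a trapezoid whose heights are the count densities in two narrow side windows. A hedged sketch (the window widths and counts below are illustrative, not taken from the dissertation):

```python
def tew_scatter_estimate(counts_lower, counts_upper, w_side, w_peak):
    # Triple-energy-window estimate: scatter under the photopeak window
    # is the area of a trapezoid whose heights are the count densities
    # (counts per keV) in the lower and upper side windows.
    return (counts_lower / w_side + counts_upper / w_side) * w_peak / 2.0
```

The estimated scatter is then subtracted pixel-by-pixel from the photopeak projection before reconstruction.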
Parton shower Monte Carlo event generators
NASA Astrophysics Data System (ADS)
Webber, Bryan
2011-12-01
A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
Parallelized quantum Monte Carlo algorithm with nonlocal worm updates.
Masaki-Kato, Akiko; Suzuki, Takafumi; Harada, Kenji; Todo, Synge; Kawashima, Naoki
2014-04-11
Based on the worm algorithm in the path-integral representation, we propose a general quantum Monte Carlo algorithm suitable for parallelizing on a distributed-memory computer by domain decomposition. Of particular importance is its application to large lattice systems of bosons and spins. A large number of worms are introduced and their population is controlled by a fictitious transverse field. For a benchmark, we study the size dependence of the Bose-condensation order parameter of the hard-core Bose-Hubbard model with L×L×βt=10240×10240×16, using 3200 computing cores, which shows good parallelization efficiency.
Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics
Kiedrowski, Brian C.; Brown, Forrest B.
2012-06-18
An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and are shown to reproduce the value, to within an additive constant, determined from a well-placed user-defined mesh. The spherical estimators are also used to assess statistical coverage.
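The Shannon entropy referred to above is computed from the binned (or estimator-weighted) fission-source scores as H = -Σᵢ pᵢ log₂ pᵢ. A minimal sketch of that tally, with invented bin scores:

```python
import math

def shannon_entropy(scores):
    # H = -sum_i p_i * log2(p_i) over estimators/bins with nonzero score,
    # where p_i is the fraction of fission-source production in bin i.
    total = sum(scores)
    h = 0.0
    for s in scores:
        if s > 0:
            p = s / total
            h -= p * math.log2(p)
    return h
```

A source spread evenly over 2^k bins gives H = k, while a fully converged point-like source gives H = 0; stationarity of H over cycles is the usual convergence diagnostic.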
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.
A continuation multilevel Monte Carlo algorithm
Collier, Nathan; Haji-Ali, Abdul-Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raúl
2014-09-05
Here, we propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. Moreover, the actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Our numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients.
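The multilevel idea underlying CMLMC can be sketched with a toy hierarchy in which the level-l bias decays geometrically: the estimator telescopes as E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with the fine and coarse samples at each level coupled through the same random input. The payoff and decay rate below are invented for illustration; the continuation over tolerances, Bayesian calibration, and bias/statistics splitting of the actual algorithm are not shown.

```python
import random

def level_sample(rng, l):
    # Toy level-l approximation of the payoff P = U**2 for U ~ U(0, 1):
    # the bias term U / 2**l decays geometrically with level, mimicking
    # a geometrically refined discretization hierarchy.
    u = rng.random()
    fine = u * u + u / 2 ** l
    coarse = u * u + u / 2 ** (l - 1) if l > 0 else None
    return fine, coarse

def mlmc_estimate(rng, max_level, n_per_level):
    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; fine and
    # coarse samples at each level share the same U, which is the
    # coupling that keeps the correction variance small.
    total = 0.0
    for l in range(max_level + 1):
        acc = 0.0
        for _ in range(n_per_level):
            fine, coarse = level_sample(rng, l)
            acc += fine - (coarse if l > 0 else 0.0)
        total += acc / n_per_level
    return total
```

The true value here is E[U²] = 1/3; the residual bias 0.5/2^L shrinks as levels are added, while most samples can be spent on the cheap coarse levels.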
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity rather than quality of processing units, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too large to store in memory and therefore to diagonalize. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improved stability of MTS and allow larger step sizes in the simulation of complex systems.
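For orientation, the leapfrog-plus-Metropolis core that GHMC/GSHMC-type methods build on can be sketched for a one-dimensional Gaussian target. The momentum-update generalization, shadow Hamiltonians, and force splitting of the methods above are deliberately omitted, and the step size and trajectory length below are arbitrary choices for the sketch.

```python
import math
import random

def hmc_step(x, rng, eps=0.2, n_leapfrog=10):
    # One conventional hybrid Monte Carlo step for U(x) = x**2 / 2 (a
    # standard Gaussian target): full momentum refresh, leapfrog
    # integration, Metropolis accept/reject on the Hamiltonian change.
    p = rng.gauss(0.0, 1.0)               # momentum refresh
    q = x
    h0 = 0.5 * p * p + 0.5 * x * x        # H = p**2/2 + U(q)
    for _ in range(n_leapfrog):
        p -= 0.5 * eps * q                # half kick (dU/dq = q)
        q += eps * p                      # drift
        p -= 0.5 * eps * q                # half kick
    h1 = 0.5 * p * p + 0.5 * q * q
    if rng.random() < math.exp(min(0.0, h0 - h1)):
        return q                          # accept the trajectory endpoint
    return x                              # reject: keep the old state
```

GSHMC-style methods modify the momentum refresh (partial, "generalized" updates) and accept on a shadow Hamiltonian instead of H, but the skeleton is the same.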
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.
2012-09-05
spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N^2 values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate this data; here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter
Monte Carlo applications at Hanford Engineering Development Laboratory
Carter, L.L.; Morford, R.J.; Wilcox, A.D.
1980-03-01
Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in execution rates of present day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques.
Estimation of beryllium ground state energy by Monte Carlo simulation
Kabir, K. M. Ariful; Halder, Amal
2015-05-15
The quantum Monte Carlo method represents a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to good results compared with the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
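A variational Monte Carlo calculation of this kind reduces to Metropolis sampling of |ψ|² and averaging the local energy. The sketch below uses the 1D harmonic oscillator (ħ = m = ω = 1) with trial function ψ(x) = exp(−αx²) instead of beryllium, purely because its local energy E_L(x) = α + x²(1/2 − 2α²) is known in closed form; at α = 1/2 the trial function is exact and the variance of E_L vanishes.

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=1):
    # Metropolis sampling of |psi|**2 = exp(-2 alpha x**2) for the trial
    # function psi(x) = exp(-alpha x**2); the energy estimate is the
    # average of the local energy E_L(x) = alpha + x**2 (1/2 - 2 alpha**2).
    rng = random.Random(seed)
    x, acc = 0.0, 0.0
    for _ in range(n_steps):
        trial = x + step * (2.0 * rng.random() - 1.0)
        # Accept with probability |psi(trial)|**2 / |psi(x)|**2.
        if rng.random() < math.exp(-2.0 * alpha * (trial * trial - x * x)):
            x = trial
        acc += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return acc / n_steps
```

By the variational principle the estimate lies at or above the exact ground state energy 1/2 for any α, which is also how trial-function quality is judged in the beryllium calculation.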
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
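The contrast between exhaustive and sampled designs can be made concrete: a full-factorial grid costs levelsⁿ runs, while a Monte Carlo design draws a fixed number of parameter vectors regardless of dimension. The sketch below is an invented illustration of that cost structure, not the authors' sampling scheme.

```python
import itertools
import random

def grid_design(levels, n_params):
    # Exhaustive full-factorial design: len(levels) ** n_params runs,
    # exponential in the number of parameters.
    return list(itertools.product(levels, repeat=n_params))

def sampled_design(bounds, n_runs, seed=0):
    # Monte Carlo design: each run draws every parameter uniformly from
    # its (lo, hi) range, so the cost is n_runs in any dimension.
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
            for _ in range(n_runs)]
```

With 3 levels and 10 parameters the grid already needs 59,049 runs; the sampled design covers the same 10-dimensional box with whatever budget the experimenter can afford.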
Minimising biases in full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2015-03-14
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] to remove the bias caused by population control, is effective and recommend its use as a post-processing step.
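The stationary-distribution computation for a two-state chain can be sketched by simple iteration of π ← πP; this is the same fixed-point calculation the authors perform for their two-determinant Markov matrix, although the real FCIQMC matrix acts on whole walker populations rather than two states, so the matrix below is a generic illustration.

```python
def stationary_distribution(p, tol=1e-12):
    # Fixed point of pi <- pi P for a 2-state Markov chain (rows of p
    # sum to 1), found by power iteration from a uniform start.
    pi = [0.5, 0.5]
    while True:
        nxt = [pi[0] * p[0][0] + pi[1] * p[1][0],
               pi[0] * p[0][1] + pi[1] * p[1][1]]
        if abs(nxt[0] - pi[0]) < tol:
            return nxt
        pi = nxt
```

For P = [[0.9, 0.1], [0.2, 0.8]] the closed-form answer is π = (p₁₀, p₀₁)/(p₀₁ + p₁₀) = (2/3, 1/3), which the iteration reproduces.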
Design of composite laminates by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Fang, Chin; Springer, George S.
1993-01-01
A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.
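The procedure can be caricatured as constrained random search: sample candidate laminates, keep the lightest one that passes the strength check. The sketch below replaces the Tsai-Wu criterion with an invented per-ply load-alignment score and standard ply angles, so it illustrates only the Monte Carlo search structure, not the laminate mechanics.

```python
import math
import random

def capacity(angles, load_angle=0.0):
    # Invented stand-in for a strength check: each ply contributes
    # cos**2 of its misalignment with the load direction.
    return sum(math.cos(math.radians(a - load_angle)) ** 2 for a in angles)

def monte_carlo_design(demand, max_plies=16, n_trials=5000, seed=3):
    # Random search: sample a ply count and standard ply angles
    # (0/+-45/90 degrees), and keep the laminate with the fewest plies
    # that still meets the demanded capacity.
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        n = rng.randint(1, max_plies)
        angles = [rng.choice([0, 45, -45, 90]) for _ in range(n)]
        if capacity(angles) >= demand and (best is None or n < len(best)):
            best = angles
    return best
```

The real procedure additionally searches over materials and core thickness and checks the constraint in every ply under each load set, but the accept-if-feasible-and-lighter loop is the same.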
Monte Carlo Methods for Bridging the Timescale Gap
NASA Astrophysics Data System (ADS)
Wilding, Nigel; Landau, David P.
We identify the origin, and elucidate the character of the extended time-scales that plague computer simulation studies of first and second order phase transitions. A brief survey is provided of a number of new and existing techniques that attempt to circumvent these problems. Attention is then focused on two novel methods with which we have particular experience: “Wang-Landau sampling” and Phase Switch Monte Carlo. Detailed case studies are made of the application of the Wang-Landau approach to calculate the density of states of the 2D Ising model and the Edwards-Anderson spin glass. The principles and operation of Phase Switch Monte Carlo are described and its utility in tackling ‘difficult’ first order phase transitions is illustrated via a case study of hard-sphere freezing. We conclude with a brief overview of promising new methods for the improvement of deterministic, spin dynamics simulations.
Monte Carlo Study of Real Time Dynamics on the Lattice
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.
2016-08-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
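The reduction from g(E1, E2) to g(E1+E2) that the abstract cautions about is, mechanically, a sum over lines of constant E1+E2; the pitfall is that the sum must be taken over the densities themselves, not over their logarithms (the entropies). A minimal sketch, with an illustrative toy table:

```python
from collections import defaultdict

def marginal_dos(g2d):
    """Collapse a 2D density of states g(E1, E2) onto g(E1 + E2).

    g2d: dict mapping (E1, E2) -> number of states. The addition is
    done on the densities themselves, not on ln g, which is where
    care must be exercised when only relative entropies are known.
    """
    g = defaultdict(float)
    for (e1, e2), states in g2d.items():
        g[e1 + e2] += states
    return dict(g)

# Toy example: two independent two-level subsystems
g2d = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```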
Distributional Monte Carlo methods for the Boltzmann equation
NASA Astrophysics Data System (ADS)
Schrock, Christopher R.
Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R³) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R³) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.
Burrows, John
2013-04-01
An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
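The workflow described, generating many synthetic calibration datasets from a model, fitting each, and assessing the spread of back-calculated results, can be sketched as follows. The response model, noise level and concentration levels are illustrative assumptions, not taken from the paper:

```python
import random, statistics

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def mc_calibration_study(n_runs=2000, seed=7):
    random.seed(seed)
    levels = [1, 2, 5, 10, 20, 50]          # calibration standards
    true_a, true_b, sigma = 0.5, 2.0, 0.8   # assumed response model
    target = 5.0                            # unknown to back-calculate
    estimates = []
    for _ in range(n_runs):
        ys = [true_a + true_b * x + random.gauss(0, sigma) for x in levels]
        a, b = fit_line(levels, ys)
        # Back-calculate the unknown from its own noisy response
        resp = true_a + true_b * target + random.gauss(0, sigma)
        estimates.append((resp - a) / b)
    bias = statistics.mean(estimates) - target
    cv = statistics.stdev(estimates) / target
    return bias, cv
```

With 2000 replicates the bias and coefficient of variation of the back-calculated concentration are estimated with useful precision; shrinking n_runs shows the small-population issues the paper highlights.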
Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
Kirk, B.L.; West, J.T.
1984-06-01
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/) that implements some of the algorithms and examples discussed here.
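The basic workhorse behind such analyses is the Metropolis sampler. A minimal sketch, not the bmcmc package's API, for the posterior of a normal mean with unit variance and a flat prior (so the posterior mean equals the sample mean, which gives a direct check):

```python
import math, random, statistics

def metropolis_mean(data, n_steps=20000, step=0.8, seed=3):
    """Metropolis sampling of the posterior of mu for data ~ N(mu, 1).

    With a flat prior the log-posterior equals the log-likelihood
    up to an additive constant.
    """
    random.seed(seed)
    def log_post(mu):
        return -0.5 * sum((x - mu) ** 2 for x in data)
    mu, samples = 0.0, []
    for i in range(n_steps):
        prop = mu + random.gauss(0, step)
        # Accept with probability min(1, post(prop)/post(mu))
        if random.random() < math.exp(min(0.0, log_post(prop) - log_post(mu))):
            mu = prop
        if i >= n_steps // 4:       # discard burn-in
            samples.append(mu)
    return statistics.mean(samples)
```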
APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula
Hwang, M.; Bae, S.; Chung, B. D.
2012-07-01
An analysis of the uncertainty quantification for the PWR LBLOCA by the Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using the Wilks' formula. The extra margin by the Wilks' formula over the true 95th percentile PCT by the Monte Carlo method was rather large. Even using the 3rd-order formula, the calculated value using the Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for the nuclear power plant safety analysis within a realistic time frame. (authors)
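The Wilks sample sizes that such comparisons rely on follow directly from the binomial order-statistic formula: the r-th largest of n runs bounds the gamma quantile with confidence beta. A short sketch reproducing the standard one-sided 95/95 values (59, 93 and 124 runs for 1st-, 2nd- and 3rd-order):

```python
from math import comb

def wilks_n(gamma=0.95, beta=0.95, order=1):
    """Smallest number of runs n such that the order-th largest output
    bounds the gamma quantile with confidence beta (one-sided Wilks)."""
    n = order
    while True:
        conf = sum(comb(n, k) * gamma ** k * (1 - gamma) ** (n - k)
                   for k in range(n - order + 1))
        if conf >= beta:
            return n
        n += 1
```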
Composite sequential Monte Carlo test for post-market vaccine safety surveillance.
Silva, Ivair R
2016-04-30
Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each one of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for managing the alpha spending at each test in a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for a sequential Monte Carlo test in sequential analysis, and this is the reason that it is called a 'composite sequential' test. An upper bound for the potential power losses from the proposed method is deduced. The composite sequential design is illustrated through an application for post-market vaccine safety surveillance data.
PEREGRINE: Bringing Monte Carlo based treatment planning calculations to today's clinic
Patterson, R; Daly, T; Garrett, D; Hartmann-Siantar, C; House, R; May, S
1999-12-13
Monte Carlo simulation of radiotherapy is now available for routine clinical use. It brings improved accuracy of dose calculations for treatments where important physics comes into play, and provides a robust, general tool for planning where empirical solutions have not been implemented. Through the use of Monte Carlo, new information, including the effects of the composition of materials in the patient, the effects of electron transport, and the details of the distribution of energy deposition, can be applied to the field. PEREGRINE™ is a Monte Carlo dose calculation solution that was designed and built specifically for the purpose of providing a practical, affordable Monte Carlo capability to the clinic. The system solution was crafted to facilitate insertion of this powerful tool into day-to-day treatment planning, while being extensible to accommodate improvements in techniques, computers, and interfaces.
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.
Markov Chain Monte Carlo and Irreversibility
NASA Astrophysics Data System (ADS)
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
Classical Trajectory and Monte Carlo Techniques
NASA Astrophysics Data System (ADS)
Olson, Ronald
The classical trajectory Monte Carlo (CTMC) method originated with Hirschfelder, who studied the H + D2 exchange reaction using a mechanical calculator [58.1]. With the availability of computers, the CTMC method was actively applied to a large number of chemical systems to determine reaction rates, and final state vibrational and rotational populations (see, e.g., Karplus et al. [58.2]). For atomic physics problems, a major step was introduced by Abrines and Percival [58.3] who employed Kepler's equations and the Bohr-Sommerfeld model for atomic hydrogen to investigate electron capture and ionization for intermediate velocity collisions of H+ + H. An excellent description is given by Percival and Richards [58.4]. The CTMC method has a wide range of applicability to strongly-coupled systems, such as collisions by multiply-charged ions [58.5]. In such systems, perturbation methods fail, and basis set limitations of coupled-channel molecular- and atomic-orbital techniques have difficulty in representing the multitude of active excitation, electron capture, and ionization channels. Vector- and parallel-processors now allow increasingly detailed study of the dynamics of the heavy projectile and target, along with the active electrons.
DPEMC: A Monte Carlo for double diffraction
NASA Astrophysics Data System (ADS)
Boonekamp, M.; Kúcs, T.
2005-05-01
We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw, to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of the program: DPEMC, version 2.4. Catalogue identifier: ADVF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems. Operating system: UNIX; Linux. Programming language used: FORTRAN 77. High speed storage required: <25 MB. No. of lines in distributed program, including test data, etc.: 71 399. No. of bytes in distributed program, including test data, etc.: 639 950. Distribution format: tar.gz. Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur
Nature of time in Monte Carlo processes
NASA Astrophysics Data System (ADS)
Choi, M. Y.; Huberman, B. A.
1984-03-01
We show that the asymptotic behavior of Monte Carlo simulations of many-body systems is much more complex than that produced by continuous dynamics regardless of the updating process. Therefore the nature of time in Monte Carlo processes is discrete enough so as to produce dynamics which is different from that generated by the familiar master equation.
Monte Carlo Volcano Seismic Moment Tensors
NASA Astrophysics Data System (ADS)
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelength of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
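The randomized-model-space idea can be sketched in miniature: draw candidate source locations at random, score each against the observations, and keep both the best model and the cloud of acceptably fitting ones. The 2D geometry and the 1/r amplitude forward model below are purely illustrative stand-ins for the paper's 3D Green's functions:

```python
import math, random

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.0)]
true_src = (0.62, 0.41)

def amplitudes(src):
    # Toy forward model: geometrical 1/r amplitude decay to each station
    return [1.0 / math.dist(src, s) for s in stations]

def monte_carlo_invert(obs, n_draws=5000, seed=11):
    """Random search over the unit square; return the best-fitting
    source and the cloud of models with misfit below a cutoff."""
    random.seed(seed)
    best, best_misfit, accepted = None, float("inf"), []
    for _ in range(n_draws):
        cand = (random.random(), random.random())
        misfit = sum((p - o) ** 2
                     for p, o in zip(amplitudes(cand), obs))
        if misfit < best_misfit:
            best, best_misfit = cand, misfit
        if misfit < 0.05:           # keep the well-resolved cloud
            accepted.append(cand)
    return best, accepted

obs = amplitudes(true_src)
```

The spread of the accepted cloud plays the role of the resolution estimate: tightly clustered parameters are robustly resolved, widely scattered ones are not.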
Dosimetry applications in GATE Monte Carlo toolkit.
Papadimitroulas, Panagiotis
2017-02-21
Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.
Ogata, Koji; Soejima, Kenji; Higo, Junichi
2006-10-01
We have developed a computational method of protein design to detect amino acid sequences that are adaptable to given main-chain coordinates of a protein. In this method, the selection of amino acid types employs a Metropolis Monte Carlo method with a scoring function in conjunction with the approximation of free energies computed from 3D structures. To compute the scoring function, a side-chain prediction using another Metropolis Monte Carlo method was performed to select structurally suitable side-chain conformations from a side-chain library. In total, two layers of Monte Carlo procedures were performed, first to select amino acid types (1st layer Monte Carlo) and then to predict side-chain conformations (2nd layer Monte Carlo). We applied this method to sequence design over the entire sequences of the SH3 domain, Protein G, and BPTI. The predicted sequences were similar to those of the wild-type proteins. We compared the results of the predictions with and without the 2nd layer Monte Carlo method. The results revealed that the two-layer Monte Carlo method produced better sequence similarity to the wild-type proteins than the one-layer method. Finally, we applied this method to neuraminidase of influenza virus. The results were consistent with the sequences identified from the isolated viruses.
Adaptive domain decomposition for Monte Carlo simulations on parallel processors
NASA Technical Reports Server (NTRS)
Wilmoth, Richard G.
1991-01-01
A method is described for performing direct simulation Monte Carlo (DSMC) calculations on parallel processors using adaptive domain decomposition to distribute the computational work load. The method has been implemented on a commercially available hypercube and benchmark results are presented which show the performance of the method relative to current supercomputers. The problems studied were simulations of equilibrium conditions in a closed, stationary box, a two-dimensional vortex flow, and the hypersonic, rarefied flow in a two-dimensional channel. For these problems, the parallel DSMC method ran 5 to 13 times faster than on a single processor of a Cray-2. The adaptive decomposition method worked well in uniformly distributing the computational work over an arbitrary number of processors and reduced the average computational time by over a factor of two in certain cases.
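The load-balancing core of such a decomposition can be sketched as splitting a one-dimensional line of cells into contiguous blocks of roughly equal particle counts, one block per processor. This greedy prefix-sum split is an illustrative stand-in, not the paper's exact scheme:

```python
def balance_domains(cell_counts, n_procs):
    """Split contiguous cells into n_procs blocks of ~equal work.

    cell_counts: particles per cell along the decomposition axis.
    Returns a list of (first_cell, last_cell) index pairs.
    """
    total = sum(cell_counts)
    target = total / n_procs
    bounds, acc, start = [], 0, 0
    for i, c in enumerate(cell_counts):
        acc += c
        # Close a block once the cumulative load reaches its share,
        # leaving at least one cell for every remaining processor
        if acc >= target * (len(bounds) + 1) and len(bounds) < n_procs - 1:
            bounds.append((start, i))
            start = i + 1
    bounds.append((start, len(cell_counts) - 1))
    return bounds
```

Re-running the split as particle counts drift is what makes the decomposition adaptive: blocks shrink where the flow concentrates particles and grow where it rarefies.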
Belle Monte-Carlo production on the Amazon EC2 cloud
NASA Astrophysics Data System (ADS)
Sevior, Martin; Fifield, Tom; Katayama, Nobuhiko
2010-04-01
The Belle II experiment which aims to increase the Luminosity of the KEKB collider by a factor of 50 will search for physics beyond the Standard Model through precision measurements and the investigation of rare processes in Flavour physics. The expected data rate is comparable to a current era LHC experiment with commensurate computing needs. Incorporating commercial cloud computing, such as that provided by the Amazon Elastic Compute Cloud (EC2) into the Belle II computing model may provide a lower Total Cost of Ownership for the Belle II computing solution. To investigate this possibility, we have created a system to conduct the complete Belle Monte Carlo simulation chain on EC2 to benchmark the cost and performance of the service. This paper will describe how this was achieved in addition to the drawbacks and costs of large-scale Monte Carlo production on EC2.
Chemical accuracy from quantum Monte Carlo for the benzene dimer
Azadi, Sam; Cohen, R. E.
2015-09-14
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
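The VMC half of the workflow can be shown in miniature for the hydrogen atom with the trial wavefunction exp(-alpha*r), a standard textbook check rather than anything from the benzene study: at alpha = 1 the local energy is exactly -1/2 hartree at every sampled point, so the variance vanishes.

```python
import math, random

def vmc_hydrogen(alpha=1.0, n_steps=20000, step=0.4, seed=5):
    """Variational Monte Carlo for the H atom, trial psi = exp(-alpha*r).

    Metropolis sampling of |psi|^2; the local energy is
    E_L = -alpha^2/2 + (alpha - 1)/r  (hartree units).
    """
    random.seed(seed)
    pos, r = [1.0, 0.0, 0.0], 1.0
    energies = []
    for i in range(n_steps):
        new = [x + random.uniform(-step, step) for x in pos]
        rn = math.sqrt(sum(x * x for x in new))
        # |psi|^2 ratio is exp(-2 alpha (rn - r))
        if random.random() < math.exp(-2 * alpha * (rn - r)):
            pos, r = new, rn
        if i > n_steps // 5:        # discard equilibration
            energies.append(-0.5 * alpha ** 2 + (alpha - 1) / r)
    return sum(energies) / len(energies)
```

For any alpha other than 1 the estimate lies above -0.5 hartree, illustrating the variational principle that drives the wave-function optimization described in the abstract.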
A Wigner Monte Carlo approach to density functional theory
Sellier, J.M. Dimov, I.
2014-08-01
In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems, a Lithium atom, a Boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show a good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.
Accelerating Monte Carlo power studies through parametric power estimation.
Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C
2016-04-01
Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulation and estimations. The estimated parameter linearly scales with study size allowing a quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2 % and higher precision for the PPE. The power extrapolated from a specific study size was in a very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed effect models.
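The core of the PPE idea, estimate a noncentrality parameter from a handful of simulated test statistics and then scale it linearly with study size, can be sketched for a simple one-sample z-test (a df = 1 chi-square statistic) instead of a pharmacometric model; every name and number below is an illustrative assumption:

```python
import math, random

def normal_sf(x):
    # Standard normal survival function via erfc
    return 0.5 * math.erfc(x / math.sqrt(2))

def power_df1(lam):
    # P(noncentral chi2 with df=1, noncentrality lam exceeds 1.96^2),
    # via the representation as a squared shifted normal variate
    c, s = 1.96, math.sqrt(lam)
    return normal_sf(c - s) + normal_sf(c + s)

def ppe_power_curve(effect=0.3, n0=50, n_sims=30,
                    sizes=(50, 100, 200, 400), seed=9):
    """Estimate the noncentrality at size n0 from a few Monte Carlo
    statistics, then scale it linearly to other study sizes."""
    random.seed(seed)
    stats = []
    for _ in range(n_sims):
        xs = [random.gauss(effect, 1.0) for _ in range(n0)]
        z = sum(xs) / math.sqrt(n0)      # z-statistic, sigma = 1 known
        stats.append(z * z)
    # E[chi2_1(lam)] = 1 + lam  ->  moment estimator for lam
    lam0 = max(sum(stats) / n_sims - 1.0, 0.0)
    return {n: power_df1(lam0 * n / n0) for n in sizes}
```

A full Monte Carlo power curve would need hundreds of simulations per study size; here 30 simulations at a single size yield the entire curve, which is the speed-up the paper reports for the nonlinear mixed-effects setting.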
Estimating return period of landslide triggering by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Peres, D. J.; Cancelliere, A.
2016-10-01
Assessment of landslide hazard is a crucial step for landslide mitigation planning. Estimation of the return period of slope instability represents a quantitative method to map landslide triggering hazard on a catchment. The most common approach to estimating return periods consists of coupling a triggering threshold equation, derived from a hydrological and slope-stability process-based model, with a rainfall intensity-duration-frequency (IDF) curve. Such a traditional approach generally neglects the effect of rainfall intensity variability within events, as well as the variability of initial conditions, which depend on antecedent rainfall. We propose a Monte Carlo approach for estimating the return period of shallow landslide triggering which accounts for both sources of variability. Synthetic hourly rainfall-landslide data generated by Monte Carlo simulations are analysed to compute return periods as the mean interarrival time of a factor of safety less than one. Applications are first conducted to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed in order to evaluate the traditional IDF-based method by comparison with the Monte Carlo one. Results show that the return period is affected significantly by variability of both rainfall intensity within events and of initial conditions, and that the traditional IDF-based approach may lead to an overestimation of the return period of landslide triggering, or, in other words, a non-conservative assessment of landslide hazard.
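The skeleton of the Monte Carlo return-period estimate reads as follows: generate a long synthetic series of storms with random intensity and duration, apply a triggering threshold that also depends on a wet/dry antecedent state, and report the mean interarrival time of triggering. The threshold law, distributions and wetness rule are toy assumptions, not the paper's process-based model:

```python
import random

def return_period(scale=20.0, years=2000, storms_per_year=20, seed=13):
    """Mean interarrival time (years) of slope-instability triggering."""
    random.seed(seed)
    triggers = 0
    wet = False                                   # antecedent-moisture state
    for _ in range(years * storms_per_year):
        duration = random.expovariate(1 / 12.0)   # hours
        intensity = random.expovariate(1 / 4.0)   # mm/h
        # Toy intensity-duration threshold, lowered after a wet antecedent
        threshold = (scale if not wet else 0.7 * scale) * duration ** -0.4
        if intensity > threshold:
            triggers += 1
        wet = intensity * duration > 40.0         # large storms leave soil wet
    return years / triggers if triggers else float("inf")
```

Raising the threshold scale lengthens the return period, and dropping the antecedent-wetness coupling reproduces the IDF-style overestimation the paper describes.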
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with the limited time available to engineers, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
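The ranking step can be sketched with a pure-NumPy leave-one-out k-nearest-neighbor classifier driving greedy sequential forward selection; the kernel-density step is omitted, and the dispersed data and failure rule below are synthetic stand-ins, not flight dynamics quantities:

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_accuracy(X, y, k=5):
    # Leave-one-out accuracy of a k-nearest-neighbor classifier
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a run never votes for itself
    nbrs = np.argsort(d, axis=1)[:, :k]
    votes = y[nbrs].mean(axis=1) > 0.5
    return (votes == y).mean()

# Synthetic Monte Carlo dispersions: 4 design parameters, 300 runs;
# "failure" depends on a combination of parameters 0 and 2 only
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.8 * X[:, 2] > 1.0).astype(float)

# Sequential forward selection: greedily add the parameter that helps most
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    scores = {j: knn_accuracy(X[:, selected + [j]], y) for j in remaining}
    best = max(scores, key=scores.get)
    if selected and scores[best] <= knn_accuracy(X[:, selected], y):
        break                            # stop once accuracy no longer improves
    selected.append(best)
    remaining.remove(best)
```

The parameters retained in `selected` are the candidates an analyst would inspect first; uninformative dispersions are rejected because they fail to raise classification accuracy.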
Chemical accuracy from quantum Monte Carlo for the benzene dimer.
Azadi, Sam; Cohen, R E
2015-09-14
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best coupled-cluster estimate, with perturbative triples at the complete-basis-set limit, is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages relate to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can therefore lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
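A minimal numeric illustration of the two approaches for a slab, with illustrative attenuation and build-up values; nothing here reproduces MicroShield or MCNP, and the Monte Carlo part estimates only the uncollided fraction:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 0.2   # total attenuation coefficient (1/cm), illustrative value
t = 10.0   # slab thickness (cm)

# Deterministic estimate: uncollided transmission times a build-up factor B
B = 2.5    # build-up factor; normally tabulated, value here is illustrative
det_transmission = B * np.exp(-mu * t)

# Monte Carlo estimate of the *uncollided* fraction: sample free path lengths
paths = rng.exponential(1.0 / mu, size=1_000_000)
mc_uncollided = (paths > t).mean()

# The two agree once the build-up factor is divided out
analytic_uncollided = np.exp(-mu * t)   # = det_transmission / B
```

The example makes the paper's point concrete: the deterministic answer is instantaneous but hinges entirely on the quality of B, whereas the Monte Carlo estimate carries only statistical error that shrinks with sample size.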
Optimization of quantum Monte Carlo wave functions by energy minimization.
Toulouse, Julien; Umrigar, C J
2007-02-28
We study three wave function optimization methods based on energy minimization in a variational Monte Carlo framework: the Newton, linear, and perturbative methods. In the Newton method, the parameter variations are calculated from the energy gradient and Hessian, using a reduced variance statistical estimator for the latter. In the linear method, the parameter variations are found by diagonalizing a nonsymmetric estimator of the Hamiltonian matrix in the space spanned by the wave function and its derivatives with respect to the parameters, making use of a strong zero-variance principle. In the less computationally expensive perturbative method, the parameter variations are calculated by approximately solving the generalized eigenvalue equation of the linear method by a nonorthogonal perturbation theory. These general methods are illustrated here by the optimization of wave functions consisting of a Jastrow factor multiplied by an expansion in configuration state functions (CSFs) for the C2 molecule, including both valence and core electrons in the calculation. The Newton and linear methods are very efficient for the optimization of the Jastrow, CSF, and orbital parameters. The perturbative method is a good alternative for the optimization of just the CSF and orbital parameters. Although the optimization is performed at the variational Monte Carlo level, we observe for the C2 molecule studied here, and for other systems we have studied, that as more parameters in the trial wave functions are optimized, the diffusion Monte Carlo total energy improves monotonically, implying that the nodal hypersurface also improves monotonically.
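The energy-minimization idea can be demonstrated on a toy problem: a 1D harmonic oscillator with a single-parameter Gaussian trial function, optimized by plain steepest descent in place of the paper's Newton and linear methods (a sketch under those simplifications, not the C2 calculation):

```python
import numpy as np

rng = np.random.default_rng(4)

def vmc_step(a, n_samples=20_000):
    # |psi|^2 for psi = exp(-a x^2) is a Gaussian with variance 1/(4a),
    # so we can sample it directly instead of running Metropolis
    x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * a)), n_samples)
    e_loc = a + x**2 * (0.5 - 2.0 * a**2)   # local energy, hbar = m = omega = 1
    dlnpsi = -x**2                          # d ln(psi) / d a
    # covariance estimator of the energy gradient dE/da
    grad = 2.0 * (np.mean(e_loc * dlnpsi) - e_loc.mean() * dlnpsi.mean())
    return e_loc.mean(), grad

a = 1.2  # deliberately poor starting parameter
for _ in range(50):
    energy, grad = vmc_step(a)
    a -= 0.5 * grad   # plain steepest descent on the variational energy
# exact minimum: a = 0.5, E = 0.5
```

The zero-variance principle mentioned in the abstract is visible here: at the exact eigenstate a = 0.5 the local energy becomes constant, so both the energy and gradient estimators lose their statistical noise.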
MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS
NASA Technical Reports Server (NTRS)
Glass, A. B.
1994-01-01
Monte Carlo analysis. Midcourse maneuvers may be made to correct for burn errors and comet movements. The MONITOR program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 255K of 8 bit bytes. The MONITOR program was developed in 1980.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
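A bare-bones Hamiltonian Monte Carlo sampler (leapfrog integration plus a Metropolis correction) for a standard normal target; in the surrogate approach described above, the exact gradient call would be replaced by the gradient of the cheap random-basis approximation:

```python
import numpy as np

rng = np.random.default_rng(5)

def neg_log_prob(q):    # target: standard normal, U(q) = q^2 / 2
    return 0.5 * q * q

def grad_u(q):          # exact gradient; a surrogate would approximate this
    return q

def hmc_draw(q, eps=0.2, n_leap=20):
    p = rng.normal()                      # resample momentum
    q_new, p_new = q, p
    # leapfrog integration of Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_u(q_new)
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_u(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_u(q_new)
    # Metropolis accept/reject on the total-energy error of the trajectory
    dh = (neg_log_prob(q_new) + 0.5 * p_new**2) - (neg_log_prob(q) + 0.5 * p**2)
    return q_new if np.log(rng.uniform()) < -dh else q

samples, q = [], 3.0
for _ in range(5_000):
    q = hmc_draw(q)
    samples.append(q)
samples = np.array(samples[500:])         # discard burn-in
```

The expensive part in realistic models is `grad_u`, called once per leapfrog step; replacing it with a fitted surrogate is what buys the scalability claimed in the abstract, while the accept/reject step keeps the chain targeting the correct distribution.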
Fourier Monte Carlo renormalization-group approach to crystalline membranes.
Tröster, A
2015-02-01
The computation of the critical exponent η characterizing the universal elastic behavior of crystalline membranes in the flat phase continues to represent challenges to theorists as well as computer simulators that manifest themselves in a considerable spread of numerical results for η published in the literature. We present additional insight into this problem that results from combining Wilson's momentum shell renormalization-group method with the power of modern computer simulations based on the Fourier Monte Carlo algorithm. After discussing the ideas and difficulties underlying this combined scheme, we present a calculation of the renormalization-group flow of the effective two-dimensional Young modulus for momentum shells of different thickness. Extrapolation to infinite shell thickness allows us to produce results in reasonable agreement with those obtained by functional renormalization group or by Fourier Monte Carlo simulations in combination with finite-size scaling. Moreover, our method allows us to obtain a decent estimate for the value of the Wegner exponent ω that determines the leading correction to scaling, which in turn allows us to refine our numerical estimate for η previously obtained from precise finite-size scaling data.
Practical Schemes for Accurate Forces in Quantum Monte Carlo.
Moroni, S; Saccani, S; Filippi, C
2014-11-11
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of theory. Algorithms to compute exact DMC forces have been proposed in the past, and one such scheme is also put forward in this work, but they remain rather impractical due to their high computational cost. As a practical route to DMC forces, we therefore revisit here an approximate method, originally developed in the context of correlated sampling and named here the Variational Drift-Diffusion (VD) approach. We thoroughly investigate its accuracy by checking the consistency between the approximate VD force and the derivative of the DMC potential energy surface for the SiH and C2 molecules and employ a wide range of wave functions optimized in VMC to assess its robustness against the choice of trial function. We find that, for all but the poorest wave function, the discrepancy between force and energy is very small over all interatomic distances, affecting the equilibrium bond length obtained with the VD forces by less than 0.004 au. Furthermore, when the VMC forces are approximate due to the use of a partially optimized wave function, the DMC forces have smaller errors and always lead to an equilibrium distance in better agreement with the experimental value. We also show that the cost of computing the VD forces is only slightly larger than the cost of calculating the DMC energy. Therefore, the VD approximation represents a robust and efficient approach to compute accurate DMC forces, superior to the VMC counterparts.
Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculation. On the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.
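The propagation idea can be sketched with a toy two-step rate model: sample hypothetical DFT errors on two barriers, push them through the model, and rank the inputs by a crude squared-correlation sensitivity measure (the paper's global sensitivity analysis is considerably more sophisticated; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

kb_T = 0.05  # eV, illustrative temperature scale

def turnover(e_ads, e_rx):
    # toy two-step rate model: adsorption then reaction (Arrhenius factors)
    k_ads = np.exp(-e_ads / kb_T)
    k_rx = np.exp(-e_rx / kb_T)
    return k_ads * k_rx / (k_ads + k_rx)

# sampled errors on the two barriers, ~0.1 eV each (a typical DFT error scale)
n = 100_000
e_ads = 0.30 + rng.normal(0.0, 0.1, n)
e_rx = 0.50 + rng.normal(0.0, 0.1, n)
log_tof = np.log(turnover(e_ads, e_rx))

# crude first-order sensitivity: squared correlation of output with each input
s_ads = np.corrcoef(e_ads, log_tof)[0, 1] ** 2
s_rx = np.corrcoef(e_rx, log_tof)[0, 1] ** 2
```

Even with sizable scatter in the turnover frequency, the ranking of the two barriers is stable, which mirrors the abstract's finding that the most important atomistic factors can still be identified.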
Analysis of real-time networks with monte carlo methods
NASA Astrophysics Data System (ADS)
Mauclair, C.; Durrieu, G.
2013-12-01
Communication networks in embedded systems are increasingly large and complex. A better understanding of the dynamics of these networks is necessary to use them optimally and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, due to the rarity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
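A sketch of the Monte Carlo view of asynchronous end-to-end delay, with invented link parameters: each hop adds a fixed transmission time plus a random wait from the unknown phase of the next link's cycle, and sampled delays rarely come near the analytic worst-case bound:

```python
import numpy as np

rng = np.random.default_rng(6)

n_runs, n_hops = 100_000, 5
period = 10.0    # ms, frame period on each asynchronous link (illustrative)
transmit = 0.8   # ms of transmission time per hop (illustrative)

# asynchrony => the wait at each hop is a uniform phase offset in [0, period)
waits = rng.uniform(0.0, period, size=(n_runs, n_hops))
delays = waits.sum(axis=1) + n_hops * transmit

weed = n_hops * (period + transmit)   # analytic worst case: every wait maximal
observed_max = delays.max()           # Monte Carlo never reaches the bound
p99 = np.quantile(delays, 0.99)
```

The gap between `p99` (or even `observed_max`) and `weed` is the phenomenon the abstract describes: the worst case requires every hop to be maximally out of phase at once, which Monte Carlo shows to be vanishingly rare.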
MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT
J. S. HENDRICK; G. W. MCKINNEY; L. J. COX
2000-01-01
The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.
Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark
2012-02-01
In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories takes advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
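The effect of a shared reference normalization can be sketched by giving several lognormal resonance strengths a common error component and comparing the spread of the summed rate with and without that correlation (a toy model with illustrative numbers, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(7)

n_mc = 50_000
# Three resonance strengths, each with ~20% individual uncertainty, all
# normalized to one reference resonance known to ~15%: the reference error
# is shared, which correlates the three inputs
sigma_ref, sigma_ind = 0.15, 0.20
ref = rng.normal(0.0, sigma_ref, n_mc)                  # common error factor
ind = rng.normal(0.0, sigma_ind, (n_mc, 3))             # independent parts
strengths_corr = np.exp(ref[:, None] + ind)             # correlated samples
# uncorrelated comparison with the *same* marginal uncertainty per strength
strengths_unc = np.exp(rng.normal(0.0, np.hypot(sigma_ref, sigma_ind), (n_mc, 3)))

weights = np.array([1.0, 0.5, 0.25])   # each resonance's rate contribution
rate_corr = strengths_corr @ weights
rate_unc = strengths_unc @ weights

# correlation inflates the spread of the summed rate
spread_corr = np.std(np.log(rate_corr))
spread_unc = np.std(np.log(rate_unc))
```

Because positive correlations add covariance terms to the variance of the sum, neglecting them underestimates the rate uncertainty, consistent with the factor-of-5 inflation reported for 23Na(α, p)26Mg.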
Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney.
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
Program summary
Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorised or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on physical system to be simulated
Classification: 7.6; 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb
Solution method: Quantum Monte Carlo
Running time: 225 min with 20 particles (4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles that are deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed to ensure detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
Neutron matter at zero temperature with an auxiliary field diffusion Monte Carlo method
NASA Astrophysics Data System (ADS)
Sarsa, A.; Fantoni, S.; Schmidt, K. E.; Pederiva, F.
2003-08-01
The recently developed auxiliary field diffusion Monte Carlo method is applied to compute the equation of state and the compressibility of neutron matter. By combining the diffusion Monte Carlo method for the spatial degrees of freedom and the auxiliary field Monte Carlo method to separate the spin-isospin operators, quantum Monte Carlo can be used to simulate the ground state of many-nucleon systems (A≲100). We use a path constraint to control the fermion sign problem. We have made simulations for realistic interactions, which include tensor and spin-orbit two-body potentials as well as three-nucleon forces. The Argonne v8' and v6' two-nucleon potentials plus the Urbana or Illinois three-nucleon potentials have been used in our calculations. We compare with Fermi hypernetted chain results. We report on the results of a periodic-box Fermi hypernetted chain calculation, which is also used to estimate the finite-size corrections to our quantum Monte Carlo simulations. Our auxiliary field diffusion Monte Carlo (AFDMC) results for v6' models of pure neutron matter are in reasonably good agreement with equivalent correlated basis function (CBF) calculations, providing energies per particle which are slightly lower than the CBF ones. However, the inclusion of the spin-orbit force leads to quite different results, particularly at relatively high densities. The resulting equation of state from AFDMC calculations is harder than the one from previous Fermi hypernetted chain studies commonly used to determine neutron star structure.
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed. According to these factors, various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, Monte Carlo techniques are accurate but too slow for clinical use. In recent years, with the development of ‘fast’ Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the
Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C
2010-12-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.
First-Order or Second-Order Kinetics? A Monte Carlo Answer
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2005-01-01
Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
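The discrimination experiment described above can be reproduced in miniature. The Python sketch below is an illustrative reconstruction, not the authors' code: the rate constant, noise level, time grid, and brute-force search over k are all arbitrary choices. It generates noisy first-order decay data and counts how often least squares prefers the correct rate law:

```python
import math, random

def c_first(t, k, c0=1.0):   # first-order decay: C(t) = C0 * exp(-k t)
    return c0 * math.exp(-k * t)

def c_second(t, k, c0=1.0):  # second-order decay: C(t) = C0 / (1 + k C0 t)
    return c0 / (1.0 + k * c0 * t)

def best_sse(model, data):
    # brute-force 1D search over the rate constant (illustration, not optimized)
    return min(sum((c - model(t, k)) ** 2 for t, c in data)
               for k in (0.02 * i for i in range(1, 151)))

random.seed(0)
times = [0.25 * i for i in range(21)]
trials, wins = 100, 0
for _ in range(trials):
    # synthetic first-order data (k = 0.5) with Gaussian measurement noise
    data = [(t, c_first(t, 0.5) + random.gauss(0.0, 0.01)) for t in times]
    if best_sse(c_first, data) < best_sse(c_second, data):
        wins += 1
```

With this noise level the correct order wins essentially every trial; raising the noise makes the two rate laws progressively harder to tell apart, which is the regime the paper studies.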
Monte Carlo molecular simulation predictions for the heat of vaporization of acetone and butyramide.
Biddy, Mary J.; Martin, Marcus Gary
2005-03-01
Vapor pressure and heats of vaporization are computed for the industrial fluid properties simulation challenge (IFPSC) data set using the Towhee Monte Carlo molecular simulation program. Results are presented for the CHARMM27 and OPLS-aa force fields. Once again, the average result using multiple force fields is a better predictor of the experimental value than either individual force field.
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
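The simulation design described here, sampling under known and controlled population conditions, is easy to illustrate. The sketch below is a hypothetical miniature, not the authors' study: it uses parallel items with unit true-score and error variance, so the population alpha is k*rho / (1 + (k-1)*rho) = 0.8 for k = 4 items and rho = 0.5:

```python
import random, statistics

def simulate_scores(n_subjects, k_items, rng):
    # parallel items: observed item score = true score + unit-variance noise
    rows = []
    for _ in range(n_subjects):
        t = rng.gauss(0.0, 1.0)
        rows.append([t + rng.gauss(0.0, 1.0) for _ in range(k_items)])
    return rows

def cronbach_alpha(rows):
    k = len(rows[0])
    item_vars = [statistics.pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

rng = random.Random(0)
# population alpha is k*rho / (1 + (k-1)*rho); here rho = 0.5, k = 4 -> 0.8
alphas = [cronbach_alpha(simulate_scores(200, 4, rng)) for _ in range(100)]
mean_alpha = statistics.mean(alphas)
```

The spread of the replicated alphas around the known population value is exactly the raw material from which coverage of a confidence-interval method would be judged.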
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
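The "basic ideas behind the algorithm" fit in a few lines. This is a generic teaching sketch (the target, proposal scale, and step count are arbitrary), not code from the article:

```python
import math, random

def metropolis(logp, x0, steps, scale=1.0, seed=0):
    # minimal random-walk Metropolis sampler
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)              # symmetric proposal
        a = math.exp(min(0.0, logp(y) - logp(x)))  # min(1, p(y)/p(x))
        if rng.random() < a:
            x = y                                  # accept, else stay put
        samples.append(x)
    return samples

# target: a standard normal known only up to its normalizing constant
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The chain's empirical mean and spread converge on those of the target even though the normalizing constant is never computed, which is the fundamental idea the authors argue is worth making explicit to students.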
Monte-Carlo based prediction of radiochromic film response for hadrontherapy dosimetry
NASA Astrophysics Data System (ADS)
Frisson, T.; Zahra, N.; Lautesse, P.; Sarrut, D.
2009-07-01
A model has been developed to calculate MD-55-V2 radiochromic film response to ion irradiation. This model is based on photon film response and film saturation by high local energy deposition computed by Monte-Carlo simulation. We have studied the response of the film to photon irradiation and we proposed a calculation method for hadron beams.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex, data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
USDA-ARS?s Scientific Manuscript database
Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Monte Carlo direct view factor and generalized radiative heat transfer programs
NASA Technical Reports Server (NTRS)
Mc Williams, J. L.; Scates, J. H.
1969-01-01
Computer programs find the direct view factor from one surface segment to another using the Monte Carlo technique, and the radiative-transfer coefficients between surface segments. An advantage of the programs is the great generality of problems treatable and the rapidity of solution from problem conception to receipt of results.
Overview and applications of the Monte Carlo radiation transport kit at LLNL
Sale, K E
1999-06-23
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects that cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for Tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major section of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary: Program title: Phoogle-C/Phoogle-G. Catalogue identifier: AEEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 51 264. No. of bytes in distributed program, including test data, etc.: 2 238 805. Distribution format: tar.gz. Programming language: C++. Computer: Designed for Intel PCs; Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1. Operating system: Windows XP. Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures. RAM: 1 GB. Classification: 21.1. External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing
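The photon-transport loop that such codes parallelize is simple at its core. The sketch below is a deliberately simplified serial version, not Phoogle's physics: scattering is isotropic, the geometry is a 1D slab, and the optical coefficients are arbitrary illustrative values:

```python
import math, random

def transmittance(mu_s, mu_a, thickness, n_photons=5000, seed=0):
    # fraction of photon weight crossing a slab; 1D walk, isotropic scattering
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0
        while True:
            z += cos_t * (-math.log(1.0 - rng.random()) / mu_t)  # free path
            if z >= thickness:
                transmitted += w
                break
            if z <= 0.0:
                break                              # escaped out the front face
            w *= albedo                            # implicit-capture absorption
            if w < 1e-4:
                break   # crude cutoff (a real code would use Russian roulette)
            cos_t = 2.0 * rng.random() - 1.0       # isotropic rescattering
    return transmitted / n_photons

t_thin = transmittance(10.0, 0.1, 0.1)   # slab ~1 mean free path thick
t_thick = transmittance(10.0, 0.1, 1.0)  # slab ~10 mean free paths thick
```

Because each photon history is independent, the outer loop maps directly onto one-thread-per-photon GPU execution, which is where the reported 70-fold speedup comes from.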
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Naglič, Peter; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran
2017-01-01
Analytical expressions for sampling the scattering angle from a phase function in Monte Carlo simulations of light propagation are available only for a limited number of phase functions. Consequently, numerical sampling methods based on tabulated values are often required instead. Using Monte Carlo simulated reflectance, we compare two existing numerical sampling methods, propose an improved one, and show that both the number of tabulated values and the numerical sampling method significantly influence the accuracy of the simulated reflectance. The provided results and guidelines should serve as a good starting point for conducting computationally efficient Monte Carlo simulations with numerical phase function sampling. PMID:28663872
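Tabulated inverse-CDF sampling of a phase function can be sketched in a few lines. The example below illustrates the general approach the abstract discusses, not the authors' improved method: it tabulates the Henyey-Greenstein phase function (an assumed choice, with arbitrary anisotropy g and table size), samples the scattering cosine by linear interpolation in the table, and checks the sample mean against the known mean cosine g:

```python
import bisect, random

g, n_table = 0.8, 2000
mus = [-1.0 + 2.0 * i / (n_table - 1) for i in range(n_table)]
# Henyey-Greenstein phase function in the scattering cosine mu (assumed model)
pdf = [0.5 * (1 - g * g) / (1 + g * g - 2 * g * m) ** 1.5 for m in mus]

# trapezoidal cumulative integral -> tabulated CDF, normalized to 1
cdf = [0.0]
for i in range(1, n_table):
    cdf.append(cdf[-1] + 0.5 * (pdf[i] + pdf[i - 1]) * (mus[i] - mus[i - 1]))
cdf = [c / cdf[-1] for c in cdf]

def sample_mu(u):
    # invert the tabulated CDF with linear interpolation between grid points
    i = min(max(bisect.bisect_left(cdf, u), 1), n_table - 1)
    t = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return mus[i - 1] + t * (mus[i] - mus[i - 1])

random.seed(0)
mean_mu = sum(sample_mu(random.random()) for _ in range(20000)) / 20000
# for Henyey-Greenstein the mean scattering cosine equals g
```

Shrinking the table or switching to nearest-neighbour lookup degrades the sampled distribution, which is the sensitivity the paper quantifies through simulated reflectance.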
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which performs volume measurements using random points: it requires only information on whether random points fall inside or outside an object, with no 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
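The core of the Monte Carlo volume estimate needs nothing beyond an inside/outside test. The sketch below is a generic illustration, not the authors' heuristic-adjustment method: a sphere stands in for the food product, and the camera-based inside test is replaced by an analytic one:

```python
import random

def mc_volume(inside, bounds, n=200000, seed=0):
    # volume = bounding-box volume * fraction of random points landing inside
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        for _ in range(n)
    )
    return box * hits / n

# sanity check on a unit sphere; true volume is 4*pi/3 ~ 4.18879
vol = mc_volume(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                ((-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0)))
```

In the paper's setting the `inside` predicate is answered by projecting each random point into the five binary camera images, but the estimator itself is unchanged.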
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
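The idea of replacing the exact logarithm with a cheap approximation inside the photon loop can be sketched as follows. This is not the authors' polynomial (their coefficients are not reproduced here); it uses a truncated atanh series on the mantissa from `math.frexp`, which keeps the relative error far below the 1% level the paper targets, and then draws exponential photon step lengths with it (the attenuation coefficient is an arbitrary illustrative value):

```python
import math, random

def fast_log(x):
    # decompose x = m * 2**e with m in [0.5, 1), then approximate ln(m)
    m, e = math.frexp(x)
    y = (m - 1.0) / (m + 1.0)
    # truncated atanh series: ln(m) = 2*(y + y^3/3 + y^5/5 + ...);
    # with |y| <= 1/3 the truncation error is far below 1%
    ln_m = 2.0 * y * (1.0 + y * y / 3.0 + y ** 4 / 5.0)
    return ln_m + e * 0.6931471805599453          # add e * ln(2)

# use it where photon-transport codes spend their time: sampling
# exponentially distributed free-path lengths
mu_t = 10.0                   # illustrative attenuation coefficient, mm^-1
random.seed(0)
steps = [-fast_log(1.0 - random.random()) / mu_t for _ in range(1000)]
```

In compiled code such a short polynomial can beat the library logarithm; in pure Python the call overhead dominates, so this serves only to show the accuracy trade-off, not the speedup.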
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Properties of reactive oxygen species by quantum Monte Carlo
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3 - N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.
Properties of reactive oxygen species by quantum Monte Carlo.
Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo
2014-07-07
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3 - N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.
Properties of reactive oxygen species by quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo
2014-07-01
The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3 - N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.
Monte Carlo implementation of polarized hadronization
NASA Astrophysics Data System (ADS)
Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.
2017-01-01
We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.
Monte-Carlo simulation of Callisto's exosphere
NASA Astrophysics Data System (ADS)
Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.
2015-12-01
We model Callisto's exosphere, based on its ice as well as non-ice surface, using a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
Extra Chance Generalized Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; Sanz-Serna, J. M.
2015-01-01
We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra-chance approach pays off in terms of the quality of the samples generated.
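The delayed-rejection idea is easiest to see on a plain random-walk chain rather than full Hybrid Monte Carlo. The sketch below is a generic two-stage delayed-rejection Metropolis sampler in the spirit of the extra-chance approach, not the authors' algorithm: when a bold first proposal is rejected, a timid second proposal gets an extra chance, accepted with the Tierney-Mira correction so the target distribution is preserved (target, proposal scales, and chain length are all arbitrary choices):

```python
import math, random

def logpi(x):
    # unnormalized log-density of the target (standard normal here)
    return -0.5 * x * x

def log_q(center, point, s):
    # Gaussian proposal log-density N(center, s^2), up to a constant
    return -0.5 * ((point - center) / s) ** 2

def dr_step(x, s1=2.5, s2=0.5):
    # stage 1: bold ordinary Metropolis proposal
    y1 = x + random.gauss(0.0, s1)
    a1 = min(1.0, math.exp(logpi(y1) - logpi(x)))
    if random.random() < a1:
        return y1
    # stage 2 ("extra chance"): timid second proposal from the current point,
    # accepted with the delayed-rejection correction (Tierney & Mira)
    y2 = x + random.gauss(0.0, s2)
    a1_rev = min(1.0, math.exp(logpi(y1) - logpi(y2)))
    num = logpi(y2) + log_q(y2, y1, s1) + math.log(max(1e-300, 1.0 - a1_rev))
    den = logpi(x) + log_q(x, y1, s1) + math.log(max(1e-300, 1.0 - a1))
    if random.random() < math.exp(min(0.0, num - den)):
        return y2
    return x

random.seed(0)
x, chain = 0.0, []
for _ in range(30000):
    x = dr_step(x)
    chain.append(x)
```

The second stage salvages many would-be rejections at the cost of an extra density evaluation, the same work-for-quality trade the abstract reports paying off.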
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time-consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
Basics of Monte-Carlo Simulation: Focusing on Dose-to-medium and Dose-to-water.
Tadano, Kiichi; Isobe, Tomonori; Sato, Eisuke; Takei, Hideyuki; Kobayashi, Daisuke; Mori, Yutaro; Tomita, Tetsuya; Sakae, Takeji
Treatment planning systems with highly accurate dose calculation algorithms, such as the Monte Carlo method and the linear Boltzmann transport equation, are becoming popular thanks to developments in computer technology. These algorithms use the new concepts of dose-to-medium and dose-to-water. However, introducing these concepts can cause confusion in clinical sites. This article explains the basics of Monte Carlo simulation and other corresponding algorithms, including the principles, the parameters, and words of caution.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
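The failure mode described here is easy to demonstrate: applying the independence-based Monte Carlo error estimator to a low-discrepancy point set over-states the actual error. A minimal sketch with a base-2 van der Corput (one-dimensional Halton) sequence; the estimator shown is the standard one, not the ensemble-based estimator the paper proposes:

```python
import math, random

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def integrate(points, func):
    """Sample mean plus the standard independence-based MC error estimate."""
    vals = [func(x) for x in points]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

def func(x):
    return x * x              # exact integral over [0, 1] is 1/3

random.seed(0)
n = 4096
mc_pts = [random.random() for _ in range(n)]
qmc_pts = [halton(i + 1, 2) for i in range(n)]
mc_est, mc_err = integrate(mc_pts, func)
qmc_est, qmc_err = integrate(qmc_pts, func)
# the true QMC error |qmc_est - 1/3| is typically far below qmc_err, which is
# exactly the mismatch an error estimator for QMC has to repair
```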
Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy
Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W
2002-02-19
Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical normal tissue, such as bone marrow, kidneys, or lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different from those for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.
Quantum Monte Carlo Calculations of Transition Metal Oxides
NASA Astrophysics Data System (ADS)
Wagner, Lucas
2006-03-01
Quantum Monte Carlo is a powerful computational tool to study correlated systems, allowing us to explicitly treat many-body interactions with favorable scaling in the number of particles. It has been regarded as a benchmark tool for first and second row condensed matter systems, although its accuracy has not been thoroughly investigated in strongly correlated transition metal oxides. QMC has also historically suffered from the mixed estimator error in operators that do not commute with the Hamiltonian and from stochastic uncertainty, which make small energy differences unattainable. Using the Reptation Monte Carlo algorithm of Moroni and Baroni (along with contributions from others), we have developed a QMC framework that makes these previously unavailable quantities computationally feasible for systems of hundreds of electrons in a controlled and consistent way, and apply this framework to transition metal oxides. We compare these results with traditional mean-field results like the LDA and with experiment where available, focusing in particular on the polarization and lattice constants in a few interesting ferroelectric materials. This work was performed in collaboration with Lubos Mitas and Jeffrey Grossman.
Testing random number generators for Monte Carlo applications.
Sim, L H; Nitschke, K N
1993-03-01
Central to any system for modelling radiation transport phenomena using Monte Carlo techniques is the method by which pseudo-random numbers are generated. This method is commonly referred to as the Random Number Generator (RNG). It is usually a computer-implemented mathematical algorithm which produces a series of numbers uniformly distributed on the interval [0,1). If this series satisfies certain statistical tests for randomness, then for practical purposes the pseudo-random numbers in the series can be considered to be random. Tests of this nature are important not only for new RNGs but also to test the implementation of known RNG algorithms in different computer environments. Six RNGs have been tested using six statistical tests and one visual test. The statistical tests are the moments, frequency (digit and number), serial, gap, and poker tests. The visual test is a simple two-dimensional ordered-pair display. In addition, the RNGs have been tested in a specific Monte Carlo application. This type of test is often overlooked; however, it is important that in addition to satisfactory performance in statistical tests, the RNG be able to perform effectively in the applications of interest. The RNGs tested here are based on a variety of algorithms, including multiplicative and linear congruential, lagged Fibonacci, and combination arithmetic and lagged Fibonacci. The effect of the Bays-Durham shuffling algorithm on the output of a known "bad" RNG has also been investigated.
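Two of the named statistical tests (the moments test and a frequency test) are simple to sketch against a linear congruential generator. The LCG constants below are the well-known Numerical Recipes values, used here only as an example test subject; the pass/fail thresholds would come from chi-square tables in practice:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2 ** 32):
    """Linear congruential generator mapped to [0, 1)."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

def raw_moment(u, k):
    """k-th raw moment; should approach 1/(k+1) for U(0, 1)."""
    return sum(x ** k for x in u) / len(u)

def frequency_chi2(u, bins=10):
    """Chi-square statistic of equal-width bin counts (9 d.o.f. for 10 bins)."""
    counts = [0] * bins
    for x in u:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = len(u) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

u = lcg(12345, 100000)
m1 = raw_moment(u, 1)     # ~ 1/2 for a uniform stream
m2 = raw_moment(u, 2)     # ~ 1/3
chi2 = frequency_chi2(u)  # compare against chi-square critical values
```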
Monte Carlo dose verification for intensity-modulated arc therapy
NASA Astrophysics Data System (ADS)
Li, X. Allen; Ma, Lijun; Naqvi, Shahid; Shih, Rompin; Yu, Cedric
2001-09-01
Intensity-modulated arc therapy (IMAT), a technique which combines beam rotation and dynamic multileaf collimation, has been implemented in our clinic. Dosimetric errors can be created by the inability of the planning system to accurately account for the effects of tissue inhomogeneities and physical characteristics of the multileaf collimator (MLC). The objective of this study is to explore the use of Monte Carlo (MC) simulation for IMAT dose verification. The BEAM/DOSXYZ Monte Carlo system was implemented to perform dose verification for the IMAT treatment. The implementation includes the simulation of the linac head/MLC (Elekta SL20), the conversion of patient CT images and beam arrangement for 3D dose calculation, the calculation of gantry rotation and leaf motion by a series of static beams and the development of software to automate the entire MC process. The MC calculations were verified by measurements for conventional beam settings. The agreement was within 2%. The IMAT dose distributions generated by a commercial forward planning system (RenderPlan, Elekta) were compared with those calculated by the MC package. For the cases studied, discrepancies of over 10% were found between the MC and the RenderPlan dose calculations. These discrepancies were due in part to the inaccurate dose calculation of the RenderPlan system. The computation time for the IMAT MC calculation was in the range of 20-80 min on 15 Pentium-III computers. The MC method was also useful in verifying the beam apertures used in the IMAT treatments.
Monte Carlo simulations and dosimetric studies of an irradiation facility
NASA Astrophysics Data System (ADS)
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of the dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
Key Words and Phrases: Parametric estimation, exponential families, nonlinear models, nonlinear least squares, neural networks, Monte Carlo simulation, computer-intensive statistical methods.
NASA Technical Reports Server (NTRS)
Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas
2000-01-01
An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
Proposal for grid computing for nuclear applications
NASA Astrophysics Data System (ADS)
Idris, Faridah Mohamad; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; Ali, Mohd Adli bin Md; Mohamed, Abdul Aziz; Ismail, Roslan; Ahmad, Abdul Rahim; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat @; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Sjaugi, Farhan
2014-02-01
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
Proposal for grid computing for nuclear applications
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
MDTS: automatic complex materials design using Monte Carlo tree search.
M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji
2017-01-01
Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large silicon-germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
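A Monte Carlo tree search over a combinatorial structure space can be sketched as follows. The "property" optimized here is a toy match-the-target score standing in for an expensive materials property calculation, and the tree policy is plain UCB1, which may differ from the policy MDTS actually implements:

```python
import math, random

random.seed(7)

# Toy black-box objective: fraction of sites matching a hypothetical target
# occupancy pattern (e.g. Si/Ge); a stand-in for an expensive property solver.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def score(struct):
    return sum(1 for a, b in zip(struct, TARGET) if a == b) / len(TARGET)

class Node:
    def __init__(self, prefix):
        self.prefix = prefix      # site occupancies decided so far
        self.children = {}        # occupancy bit -> child Node
        self.visits = 0
        self.total = 0.0

def select(node):
    """UCB1: balance exploitation (mean score) and exploration."""
    log_n = math.log(node.visits)
    return max(node.children.values(),
               key=lambda c: c.total / c.visits
               + math.sqrt(2.0 * log_n / c.visits))

def rollout(prefix):
    """Complete the structure at random and evaluate the black box."""
    rest = [random.randint(0, 1) for _ in range(len(TARGET) - len(prefix))]
    return score(prefix + rest)

def mcts(iterations=2000):
    root = Node([])
    best_val = -1.0
    for _ in range(iterations):
        node, path = root, [root]
        # selection, expanding the first untried branch when one exists
        while len(node.prefix) < len(TARGET):
            if len(node.children) < 2:
                bit = len(node.children)
                node.children[bit] = Node(node.prefix + [bit])
                node = node.children[bit]
                path.append(node)
                break
            node = select(node)
            path.append(node)
        val = rollout(node.prefix)
        best_val = max(best_val, val)
        for n in path:            # backpropagation along the visited path
            n.visits += 1
            n.total += val
    return best_val

best = mcts()
```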
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, the sixteen most time-consuming subroutines were examined and nine of them modified to accelerate computations with numerical output equivalent to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
Monte Carlo Modeling of High-Energy Film Radiography
Miller, A.C., Jr.; Cochran, J.L.; Lamberti, V.E.
2003-03-28
High-energy film radiography methods, adapted in the past to performing specific tasks, must now meet increasing demands to identify defects and perform critical measurements in a wide variety of manufacturing processes. Although film provides unequaled resolution for most components and assemblies, image quality must be enhanced with much more detailed information to identify problems and qualify features of interest inside manufactured items. The work described is concerned with improving current 9 MeV nondestructive practice by optimizing the important parameters involved in film radiography using computational methods. In order to follow important scattering effects produced by electrons, the Monte Carlo N-Particle (MCNP) transport code was used with advanced, highly parallel computer systems. The work has provided a more detailed understanding of latent image formation at high X-ray energies, and suggests that improvements can be made in our ability to identify defects and to obtain much more detail in images of fine features.
Monte Carlo calculations of (e,e′p) reactions
Pieper, S.C.; Pandharipande, V.R.; Boffi, S.; Radici, M.
1995-08-01
We have used our ¹⁶O Monte Carlo program to compute the p₃/₂ quasihole wave function in ¹⁶O and the Pavia program to compute ¹⁶O(e,e′p)¹⁵N(3/2⁻) with this wave function. We also developed a local-density approximation (LDA) for obtaining the quasihole wave function from a mean-field wave function, and studied the effects of using this LDA on the outgoing distorted waves. We find that we can predict correctly the contribution of the interior of the nucleus to the observed (e,e′p) cross sections, but the surface contribution is too large. The LDA modifications to the outgoing wave function are small.
The Monte Carlo calculation of integral radiation dose in xeromammography.
Dance, D R
1980-01-01
A Monte Carlo computer program has been developed for the computation of integral radiation dose to the breast in xeromammography. The results are given in terms of the integral dose per unit area of the breast per unit incident exposure. The calculations have been made for monoenergetic incident photons and the results integrated over a variety of X-ray spectra from both tungsten and molybdenum targets. This range incorporates qualities used in conventional mammography and xeromammography. The program includes the selenium plate used in xeroradiography; the energy absorbed in this detector has also been investigated. The latter calculations have been used to predict relative values of exposure and of integral dose to the breast for xeromammograms taken at various radiation qualities. The results have been applied to recent work on the reduction of patient exposure in xeromammography by the addition of aluminium filters to the X-ray beam.
Combining four Monte Carlo estimators for radiation momentum deposition
Urbatsch, Todd J; Hykes, Joshua M
2010-11-18
Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the gain in FOM is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10-20% greater than any of the solo estimators' FOMs. In addition, the numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions.
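Combining estimators by variance weighting can be illustrated on a toy tally where two standard estimators (collision and track-length) measure the same quantity. This sketch uses simple inverse-variance weights and ignores the covariance between the estimators, which a full combined estimator of the kind described must account for:

```python
import math, random

random.seed(3)

# Volume-integrated flux in a purely absorbing 1-D slab [0, L] with a unit
# source at x = 0: two unbiased estimators of the same quantity.
L, SIGMA, N = 2.0, 1.0, 50000
coll, track = [], []
for _ in range(N):
    d = random.expovariate(SIGMA)                # distance to absorption site
    coll.append(1.0 / SIGMA if d < L else 0.0)   # collision estimator
    track.append(min(d, L))                      # track-length estimator

def stats(scores):
    n = len(scores)
    m = sum(scores) / n
    v = sum((s - m) ** 2 for s in scores) / (n - 1) / n  # variance of the mean
    return m, v

(mc, vc), (mt, vt) = stats(coll), stats(track)
# inverse-variance weighting; a simplification that drops the covariance term
w = (1.0 / vc) / (1.0 / vc + 1.0 / vt)
combined = w * mc + (1.0 - w) * mt
exact = (1.0 - math.exp(-SIGMA * L)) / SIGMA     # analytic reference
```

Any convex combination of unbiased estimators stays unbiased, so the weighting only affects the variance (and hence the FOM), not the expected value.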
Kinetic Monte Carlo simulation of the classical nucleation process
NASA Astrophysics Data System (ADS)
Filipponi, A.; Giammatteo, P.
2016-12-01
We implemented a kinetic Monte Carlo computer simulation of the nucleation process in the framework of the coarse-grained scenario of the Classical Nucleation Theory (CNT). The computational approach is efficient for a wide range of temperatures and sample sizes and provides a reliable simulation of the stochastic process. The results for the nucleation rate are in agreement with the CNT predictions based on the stationary solution of the set of differential equations for the continuous variables representing the average population distribution of nuclei size. Time-dependent nucleation behavior can also be simulated, with results in agreement with previous approaches. The method, here established for the case in which the excess free energy of a crystalline nucleus is a smooth function of the size, can be particularly useful when more complex descriptions are required.
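A coarse-grained nucleation walk of this kind reduces to a two-reaction kinetic Monte Carlo (Gillespie-type) simulation on the nucleus size n. The free-energy parameters below are arbitrary illustration values chosen so the barrier is small, not values from the paper:

```python
import math, random

random.seed(11)

# Coarse-grained CNT free energy of an n-molecule nucleus, in units of kT:
# bulk driving force plus surface penalty (illustrative, barrier ~2 kT).
MU, SIG = 0.5, 1.5

def dG(n):
    return -MU * n + SIG * n ** (2.0 / 3.0)

NU = 1.0   # attachment attempt frequency

def rates(n):
    """Attachment/detachment rates obeying detailed balance w.r.t. exp(-dG)."""
    k_plus = NU
    k_minus = NU * math.exp(dG(n) - dG(n - 1)) if n > 1 else 0.0
    return k_plus, k_minus

def first_passage(n_target=30):
    """KMC walk of the nucleus size until it grows past n_target."""
    n, t = 1, 0.0
    while n < n_target:
        kp, km = rates(n)
        t += random.expovariate(kp + km)              # exponential waiting time
        n += 1 if random.random() < kp / (kp + km) else -1
    return t

t = first_passage()
```

Averaging the first-passage time over many walks gives a nucleation-rate estimate to compare against the stationary CNT prediction.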
Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm
NASA Astrophysics Data System (ADS)
Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David
2015-04-01
The χ² (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ² (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
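The cited algorithm is the affine-invariant ensemble ("stretch move") sampler behind the emcee package. A minimal stretch-move version on a toy two-parameter surface can be sketched as follows; the Gaussian "likelihood" here is an invented stand-in for a real oscillation fit, not data from the paper:

```python
import math, random

random.seed(4)

# Toy stand-in for -chi2/2 of an oscillation fit: independent Gaussian in two
# "mixing parameters" centered at (0.3, 2.5).
def log_prob(theta):
    x, y = theta
    return -0.5 * ((x - 0.3) ** 2 / 0.01 + (y - 2.5) ** 2 / 0.04)

A, NW, D = 2.0, 20, 2            # stretch scale, walkers, dimensions
walkers = [[random.uniform(0.0, 1.0), random.uniform(2.0, 3.0)]
           for _ in range(NW)]
samples = []
for step in range(2000):
    for k in range(NW):
        j = random.choice([i for i in range(NW) if i != k])
        z = ((A - 1.0) * random.random() + 1.0) ** 2 / A   # g(z) ~ 1/sqrt(z)
        prop = [walkers[j][d] + z * (walkers[k][d] - walkers[j][d])
                for d in range(D)]
        log_r = (D - 1) * math.log(z) + log_prob(prop) - log_prob(walkers[k])
        if math.log(random.random()) < log_r:
            walkers[k] = prop
    if step >= 500:               # discard burn-in, then accumulate samples
        samples.extend(list(w) for w in walkers)

mean_x = sum(s[0] for s in samples) / len(samples)
mean_y = sum(s[1] for s in samples) / len(samples)
```

Marginalizing the accumulated samples (histogramming one parameter at a time) yields the allowed regions without ever evaluating the function on a full grid.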
Kinetic Monte Carlo Simulation of Oxygen Diffusion in Ytterbium Disilicate
NASA Astrophysics Data System (ADS)
Good, Brian
2015-03-01
Ytterbium disilicate is of interest as a potential environmental barrier coating for aerospace applications, notably for use in next generation jet turbine engines. In such applications, the diffusion of oxygen and water vapor through these coatings is undesirable if high temperature corrosion is to be avoided. In an effort to understand the diffusion process in these materials, we have performed kinetic Monte Carlo simulations of vacancy-mediated oxygen diffusion in ytterbium disilicate. Oxygen vacancy site energies and diffusion barrier energies are computed using Density Functional Theory. We find that many potential diffusion paths involve large barrier energies, but some paths have barrier energies smaller than one electron volt. However, computed vacancy formation energies suggest that the intrinsic vacancy concentration is small in the pure material, with the result that the material is unlikely to exhibit significant oxygen permeability.
Monte Carlo simulation of laser backscatter from sea water
NASA Astrophysics Data System (ADS)
Koerber, B. W.; Phillips, D. M.
1982-01-01
A Monte Carlo simulation study of laser backscatter from sea water has been carried out to provide data required to assess the feasibility of measuring inherent optical propagation properties of sea water from an aircraft. The possibility of deriving such information from the backscatter component of the return signals measured by the WRELADS laser airborne depth sounder system was examined. Computations were made for various water turbidity conditions and for different fields of view of the WRELADS receiver. Using a simple model fitted to the computed backscatter data, it was shown that values of the scattering and absorption coefficients can be derived from the initial amplitude and the decay rate of the backscatter envelope.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction of the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
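Interaction forcing, one of the listed techniques, can be sketched for a weakly scattering slab: the free path is sampled from the exponential distribution truncated to the slab, and the photon weight carries the true interaction probability, so every history contributes to the tally instead of the rare few that interact. A minimal sketch with invented coefficients, not the paper's code:

```python
import math, random

random.seed(5)

MU_S, L, N = 0.05, 1.0, 20000   # scattering coeff. (1/cm), depth (cm), histories

def analog():
    """Analog sampling: most histories leave the slab without interacting."""
    d = random.expovariate(MU_S)
    return 1.0 if d < L else 0.0

def forced():
    """Force an interaction inside [0, L]; the weight is the true probability."""
    p_int = 1.0 - math.exp(-MU_S * L)
    u = random.random()
    d = -math.log(1.0 - u * p_int) / MU_S   # depth from truncated exponential
    # d would seed the subsequent scattering/dose tally; the weight alone
    # already gives a zero-variance estimate of the interaction probability
    return p_int

mean_a = sum(analog() for _ in range(N)) / N
mean_f = sum(forced() for _ in range(N)) / N
exact = 1.0 - math.exp(-MU_S * L)
```

Both estimators are unbiased; the forced version simply moves the rare-event probability out of the sampling and into the weight.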
Kinetic lattice Monte Carlo simulation of viscoelastic subdiffusion.
Fritsch, Christian C; Langowski, Jörg
2012-08-14
We propose a kinetic Monte Carlo method for the simulation of subdiffusive random walks on a Cartesian lattice. The random walkers are subject to viscoelastic forces which we compute from their individual trajectories via the fractional Langevin equation. At every step the walkers move by one lattice unit, which makes them differ essentially from continuous-time random walks, where the subdiffusive behavior is induced by random waiting. To enable computationally inexpensive simulations with n-step memories, we use an approximation of the memory and the memory kernel functions with a complexity O(log n). Discretization and approximation artifacts are compensated for with numerical adjustments of the memory kernel functions. We verify with a number of analyses that this new method provides binary fractional random walks that are fully consistent with the theory of fractional Brownian motion.
Independent pixel and Monte Carlo estimates of stratocumulus albedo
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN
1994-01-01
Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and the computing cost. Currently, the lowest computing costs are obtained when applying a graphic processing unit (GPU) originally developed for speeding up graphic processing in the consumer market. In this article we present an implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation, accelerated on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
A vectorized Monte Carlo code for modeling photon transport in SPECT
Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J. (Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710)
1993-07-01
A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT.
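The history-based versus event-based distinction can be sketched in a few lines: the event-based version keeps all photon states in arrays and applies each event type to every live history in an inner loop over histories, which is the loop a vector unit (or a GPU) can pipeline. This is a toy 1-D attenuation model for illustration, not the SPECT code itself:

```python
import random

random.seed(9)

N, MU, P_ABS, L = 20000, 1.0, 0.3, 2.0   # histories, 1/cm, per-collision, cm

def history_based():
    """Scalar state: one photon history completes before the next begins."""
    trans = 0
    for _ in range(N):
        z = 0.0
        while True:
            z += random.expovariate(MU)      # flight to the next collision
            if z > L:
                trans += 1                   # escaped through the slab
                break
            if random.random() < P_ABS:      # absorbed at the collision
                break
    return trans / N

def event_based():
    """Array state: each pass applies one event type to every live history.
    The inner loops over `live` are what a vector unit can pipeline."""
    z = [0.0] * N
    live = list(range(N))
    trans = 0
    while live:
        for i in live:                        # event 1: free flight, all photons
            z[i] += random.expovariate(MU)
        trans += sum(1 for i in live if z[i] > L)
        live = [i for i in live               # event 2: collision / absorption
                if z[i] <= L and random.random() >= P_ABS]
    return trans / N

t1 = history_based()
t2 = event_based()
```

The two versions consume the random stream in a different order, so their tallies agree only statistically, as with the scalar and vectorized codes in the abstract.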
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
NASA Astrophysics Data System (ADS)
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
Self-learning quantum Monte Carlo method in interacting fermion systems
NASA Astrophysics Data System (ADS)
Xu, Xiao Yan; Qi, Yang; Liu, Junwei; Fu, Liang; Meng, Zi Yang
2017-07-01
The self-learning Monte Carlo method is a powerful general-purpose numerical method recently introduced to simulate many-body systems. In this work, we extend it to an interacting fermion quantum system in the framework of the widely used determinant quantum Monte Carlo. This method can generally reduce the computational complexity and moreover can greatly suppress the autocorrelation time near a critical point. This enables us to simulate an interacting fermion system on a 100 ×100 lattice even at the critical point and obtain critical exponents with high precision.
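The core self-learning idea, learn a cheap effective model from trial samples and use it to propose global moves that a Metropolis-Hastings correction makes exact, can be shown on a toy continuous target. This is only an illustrative one-variable sketch, not the determinant-QMC formulation of the paper; the target energy, the Gaussian effective model, and all parameters are invented:

```python
import numpy as np

# Toy self-learning Monte Carlo: (1) learn an effective Gaussian model
# from a short trial run, (2) propose global moves from it, (3) accept
# with a Metropolis-Hastings ratio so the true target is still sampled.
rng = np.random.default_rng(1)

def energy(x):                       # "expensive" true model (toy)
    return 0.5 * x**2 + 0.1 * x**4

# Step 1: learn an effective variance by importance-reweighting N(0, 1)
# trial samples toward the true Boltzmann weight exp(-energy).
trial = rng.normal(0.0, 1.0, size=5000)
w = np.exp(-energy(trial) + 0.5 * trial**2)
var_eff = np.average(trial**2, weights=w)

def energy_eff(x):                   # cheap learned effective model
    return 0.5 * x**2 / var_eff

# Step 2: global proposals from the effective model, MH-corrected.
x, samples = 0.0, []
for _ in range(20000):
    y = rng.normal(0.0, np.sqrt(var_eff))
    # log acceptance ratio mixes true and effective energies
    log_a = (energy(x) - energy(y)) - (energy_eff(x) - energy_eff(y))
    if np.log(rng.random()) < log_a:
        x = y
    samples.append(x)

print(np.var(samples))
```

Because proposals are global draws from the learned model, successive samples decorrelate quickly, which is the mechanism behind the suppressed autocorrelation time the abstract reports near criticality.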
Improved short adjacent repeat identification using three evolutionary Monte Carlo schemes.
Xu, Jin; Li, Qiwei; Li, Victor O K; Li, Shuo-Yen Robert; Fan, Xiaodan
2013-01-01
This paper employs three Evolutionary Monte Carlo (EMC) schemes to solve the Short Adjacent Repeat Identification Problem (SARIP), which aims to identify the common repeat units shared by multiple sequences. The three EMC schemes, i.e., Random Exchange (RE), Best Exchange (BE), and crossover, are implemented on a parallel platform. The simulation results show that compared with the conventional Markov Chain Monte Carlo (MCMC) algorithm, all three EMC schemes can not only shorten the computation time via speeding up the convergence but also improve the solution quality in difficult cases. Moreover, we observe that the performance of the different EMC schemes depends on the degeneracy degree of the motif pattern.
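The Random Exchange scheme named in the abstract follows the same pattern as parallel-tempering exchange: several chains run at different temperatures and occasionally swap states with a Metropolis-style probability. A toy sketch on an invented double-well energy (not the SARIP motif model) shows the mechanics:

```python
import math
import random

# Toy Random Exchange sketch: three Metropolis chains at different
# temperatures; adjacent chains swap states with probability
# min(1, exp((1/T_i - 1/T_j) * (E_i - E_j))).
random.seed(0)

def energy(x):
    return (x * x - 1.0) ** 2          # double well, minima at +/-1

temps = [0.05, 0.2, 1.0]               # assumed temperature ladder
xs = [1.0 for _ in temps]              # one state per chain

for step in range(20000):
    # local Metropolis move in every chain
    for i, t in enumerate(temps):
        y = xs[i] + random.gauss(0.0, 0.5)
        if random.random() < math.exp(min(0.0, (energy(xs[i]) - energy(y)) / t)):
            xs[i] = y
    # random exchange attempt between a random adjacent pair
    i = random.randrange(len(temps) - 1)
    d_e = energy(xs[i]) - energy(xs[i + 1])
    d_b = 1.0 / temps[i] - 1.0 / temps[i + 1]
    if random.random() < math.exp(min(0.0, d_b * d_e)):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
```

The hot chains cross the barrier between wells freely and feed those crossings down to the cold chain through exchanges, which is why exchange schemes converge faster than a single MCMC chain on multimodal problems.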
Energy-Driven Kinetic Monte Carlo Method and Its Application in Fullerene Coalescence.
Ding, Feng; Yakobson, Boris I
2014-09-04
Mimicking the conventional barrier-based kinetic Monte Carlo simulation, an energy-driven kinetic Monte Carlo (EDKMC) method was developed to study the structural transformation of carbon nanomaterials. The new method is many orders of magnitude faster than standard molecular dynamics or Monte Carlo (MC) simulations and thus allows us to explore rare events within a reasonable computational time. As an example, the temperature dependence of fullerene coalescence was studied. The simulation, for the first time, revealed that short capped single-walled carbon nanotubes (SWNTs) appear as low-energy metastable structures during the structural evolution.
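The distinguishing feature of the energy-driven approach is that a move is accepted from the energy change alone, with no activation-barrier calculation. A hypothetical minimal sketch of one such step, on an invented 1-D "structure" rather than a carbon nanostructure, looks like this:

```python
import math
import random

# Energy-driven (barrier-free) kinetic MC sketch: candidate structural
# moves are accepted via a Metropolis factor on the energy change dE
# alone; no transition-state barrier is ever computed.
random.seed(2)

def energy(state):
    return sum((s - 1) ** 2 for s in state)   # toy energy, minimum at all-ones

kT = 0.1                                      # assumed thermal energy
state = [0, 0, 0, 0]                          # invented starting structure

for _ in range(2000):
    i = random.randrange(len(state))
    candidate = list(state)
    candidate[i] += random.choice((-1, 1))    # local structural change
    d_e = energy(candidate) - energy(state)
    # energy-driven acceptance: only dE enters, never a barrier height
    if d_e <= 0 or random.random() < math.exp(-d_e / kT):
        state = candidate

print(state)  # relaxes toward the low-energy structure
```

Skipping the barrier search is what buys the orders-of-magnitude speedup the abstract cites, at the cost of losing the true transition rates between states.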
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; Dong, Qiaoyuan; Kleinhenz, Joseph; Cohen, Guy; Gull, Emanuel
2017-02-01
We generalize the recently developed inchworm quantum Monte Carlo method to the full Keldysh contour with forward, backward, and equilibrium branches to describe the dynamics of strongly correlated impurity problems with time-dependent parameters. We introduce a method to compute Green's functions, spectral functions, and currents for inchworm Monte Carlo and show how systematic error assessments in real time can be obtained. We then illustrate the capabilities of the algorithm with a study of the behavior of quantum impurities after an instantaneous voltage quench from a thermal equilibrium state.
Estimating the parameters of dynamical systems from Big Data using Sequential Monte Carlo samplers
NASA Astrophysics Data System (ADS)
Green, P. L.; Maskell, S.
2017-09-01
In this paper the authors present a method which facilitates computationally efficient parameter estimation of dynamical systems from a continuously growing set of measurement data. It is shown that the proposed method, which utilises Sequential Monte Carlo samplers, is guaranteed to be fully parallelisable (in contrast to Markov chain Monte Carlo methods) and can be applied to a wide variety of scenarios within structural dynamics. Its ability to allow convergence of one's parameter estimates, as more data is analysed, sets it apart from other sequential methods (such as the particle filter).
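The structure of such a sequential Monte Carlo sampler, weighted particles reweighted as each data batch arrives, with the likelihood evaluations parallelisable across particles, can be sketched on a toy model. The one-parameter Gaussian likelihood below is an invented stand-in for a structural-dynamics model, and the resampling threshold and jitter are assumptions:

```python
import numpy as np

# Toy SMC sampler for a growing data set: particles represent the
# posterior over one parameter theta; each new batch of measurements
# reweights every particle (embarrassingly parallel), and particles
# are resampled when the effective sample size collapses.
rng = np.random.default_rng(3)

true_theta, noise = 2.0, 0.5
particles = rng.normal(0.0, 5.0, size=2000)       # draws from a wide prior
logw = np.zeros_like(particles)

for batch in range(10):
    data = true_theta + noise * rng.normal(size=20)   # new measurements
    # log-likelihood of the new batch, for all particles at once
    logw += (-0.5 * ((data[None, :] - particles[:, None]) / noise) ** 2).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                    # effective sample size
    if ess < len(particles) / 2:                  # resample and jitter
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx] + 0.01 * rng.normal(size=len(particles))
        logw = np.zeros_like(particles)

estimate = np.average(particles, weights=np.exp(logw - logw.max()))
print(estimate)
```

Because the reweighting loop touches each particle independently, the method parallelises cleanly, in contrast to the inherently serial accept/reject chain of MCMC, which is the contrast the abstract draws.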