Science.gov

Sample records for carlo computer codes-mcb

  1. Burnup calculations for KIPT accelerator driven subcritical facility using Monte Carlo computer codes-MCB and MCNPX.

    SciTech Connect

    Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division

    2009-06-09

Argonne National Laboratory (ANL) of USA and Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukraine nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is approximately 375 kW, including a fission power of approximately 260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which lowers the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. Recent developments in Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the
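The reactivity loss described in this abstract is driven by depletion of fissile nuclides under neutron flux. As a minimal illustration (not the MCB/MCNPX methodology, and with purely hypothetical numbers), a one-group, single-nuclide depletion step can be sketched as:

```python
import math

def deplete(n0, sigma_barns, flux, days):
    """One-group depletion of a single nuclide: dN/dt = -sigma * phi * N."""
    sigma_cm2 = sigma_barns * 1e-24      # barns -> cm^2
    t_seconds = days * 86400.0
    return n0 * math.exp(-sigma_cm2 * flux * t_seconds)

# Hypothetical values: 1e21 atoms/cm^3 of fissile material, a 500-barn
# one-group absorption cross section, and a flux of 1e13 n/cm^2/s.
n_end = deplete(1e21, 500.0, 1e13, 365.0)
burned_fraction = 1.0 - n_end / 1e21     # fissile fraction consumed in a year
```

Real burnup codes solve coupled Bateman equations for hundreds of nuclides, with cross sections updated by the transport calculation at each step; this single-exponential form only conveys why reactivity falls monotonically during operation.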

  2. Monte Carlo dose computation for IMRT optimization*

    NASA Astrophysics Data System (ADS)

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.
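One way to realize such a hybrid scheme (a sketch under assumptions, not the authors' algorithm) is to iterate cheap pencil-beam gradient updates against a fixed correction term obtained from the accurate dose engine, here mocked by a perturbed kernel standing in for Monte Carlo dose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (hypothetical sizes): 5 beamlets, 20 voxels, unit prescription
n_voxels, n_beamlets = 20, 5
pencil = rng.uniform(0.0, 0.2, size=(n_voxels, n_beamlets))    # fast approximate kernel
accurate = pencil * rng.uniform(0.9, 1.1, size=pencil.shape)   # stand-in for MC dose engine
prescription = np.ones(n_voxels)

weights = np.zeros(n_beamlets)
for outer in range(5):                        # accurate-dose recomputation loop
    correction = accurate @ weights - pencil @ weights   # offset between the two models
    for inner in range(200):                  # cheap pencil-beam gradient updates
        dose = pencil @ weights + correction
        grad = pencil.T @ (dose - prescription)
        weights = np.maximum(weights - 0.05 * grad, 0.0)   # fluence stays non-negative

final_error = float(np.abs(accurate @ weights - prescription).mean())
```

The point of the structure is that the expensive (here, "accurate") dose is evaluated only a handful of times, while the inexpensive pencil-beam model carries the many inner optimization steps.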

  3. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the grant duration; it has resulted in 13

  4. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular

  5. A Monte Carlo photocurrent/photoemission computer program

    NASA Technical Reports Server (NTRS)

    Chadsey, W. L.; Ragona, C.

    1972-01-01

    A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.

  6. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

abcpmc is a Python Approximate Bayesian Computation (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with particle filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
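The ABC-PMC idea can be sketched in a few lines (a simplified illustration, not the abcpmc API; proper importance weighting is omitted, and the problem, prior, and tolerance schedule are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: infer the mean of a Gaussian from an observed sample mean.
observed = 2.0
def simulate(theta):
    return rng.normal(theta, 1.0, size=50).mean()

n_particles = 200
epsilons = [1.0, 0.5, 0.2]                  # shrinking tolerance schedule

# Generation 0: plain rejection ABC from a U(-10, 10) prior
particles = []
while len(particles) < n_particles:
    theta = rng.uniform(-10.0, 10.0)
    if abs(simulate(theta) - observed) < epsilons[0]:
        particles.append(theta)
particles = np.array(particles)

# PMC generations: resample, perturb with a Gaussian kernel, re-accept
for eps in epsilons[1:]:
    kernel_sd = 2.0 * particles.std()       # simple perturbation-width heuristic
    new = []
    while len(new) < n_particles:
        theta = rng.choice(particles) + rng.normal(0.0, kernel_sd)
        if abs(simulate(theta) - observed) < eps:
            new.append(theta)
    particles = np.array(new)

posterior_mean = float(particles.mean())
```

Each generation reuses the accepted particles of the previous one as the proposal population, which is what makes PMC far more efficient than rejection ABC at small tolerances.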

  7. Monte Carlo Computer Simulation of a Rainbow.

    ERIC Educational Resources Information Center

    Olson, Donald; And Others

    1990-01-01

    Discusses making a computer-simulated rainbow using principles of physics, such as reflection and refraction. Provides BASIC program for the simulation. Appends a program illustrating the effects of dispersion of the colors. (YP)
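The physics in the article can be condensed into a small Monte Carlo (written here in Python rather than the article's BASIC, purely as an illustration): random impact parameters, Snell's law, one internal reflection, and the caustic at minimum deflection that produces the roughly 42-degree primary bow.

```python
import math, random

random.seed(0)
N_WATER = 1.333                        # refractive index of water (approximate)

def deflection_deg(impact):
    """Total deflection of a ray through a droplet with one internal reflection."""
    i = math.asin(impact)              # angle of incidence for impact parameter in [0, 1)
    r = math.asin(math.sin(i) / N_WATER)   # Snell's law at the air-water surface
    return math.degrees(math.pi + 2 * i - 4 * r)

# Monte Carlo over uniformly random impact parameters
angles = [deflection_deg(random.random()) for _ in range(100_000)]
rainbow_angle = 180.0 - min(angles)    # scattering angle of the caustic, about 42 degrees
```

Repeating the experiment with wavelength-dependent refractive indices reproduces the dispersion of the colors mentioned in the appendix program.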

  8. Computing Entanglement Entropy in Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Melko, Roger

    2012-02-01

The scaling of entanglement entropy in quantum many-body wavefunctions is expected to be a fruitful resource for studying quantum phases and phase transitions in condensed matter. However, until the recent development of estimators for Renyi entropy in quantum Monte Carlo (QMC), we have been in the dark about the behaviour of entanglement in all but the simplest two-dimensional models. In this talk, I will outline the measurement techniques that allow access to the Renyi entropies in several different QMC methodologies. I will then discuss recent simulation results demonstrating the richness of entanglement scaling in 2D, including: the prevalence of the "area law"; topological entanglement entropy in a gapped spin liquid; anomalous subleading logarithmic terms due to Goldstone modes; universal scaling at critical points; and examples of emergent conformal-like scaling in several gapless wavefunctions. Finally, I will explore the idea that "long range entanglement" may complement the notion of "long range order" for quantum phases and phase transitions which lack a conventional order parameter description.

  9. Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo

    SciTech Connect

    Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.

    2000-10-10

Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10^9 atoms for times on the order of 100 ns (10^-7 s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
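The KMC idea of advancing the clock by event rates rather than atomic vibrations can be sketched with the residence-time (BKL) algorithm; the two event rates below are hypothetical:

```python
import math, random

random.seed(42)

def kmc_step(rates, t):
    """One residence-time (BKL) kinetic Monte Carlo step.

    rates: rate constants (1/s) of all events possible in the current state.
    Returns the chosen event index and the advanced clock.
    """
    total = sum(rates)
    x = random.random() * total              # pick an event proportionally to its rate
    acc = 0.0
    for idx, rate in enumerate(rates):
        acc += rate
        if x < acc:
            break
    t += -math.log(random.random()) / total  # exponentially distributed residence time
    return idx, t

# Hypothetical events: a fast vacancy hop and a slow cluster dissociation
rates = [1.0e6, 1.0e2]
t, counts = 0.0, [0, 0]
for _ in range(10_000):
    idx, t = kmc_step(rates, t)
    counts[idx] += 1
```

Because the clock advances by an exponential draw with mean 1/(total rate), the simulated time scales with the relevant kinetics instead of femtosecond vibrations, which is what lets KMC reach macroscopic times.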

  10. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  11. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
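The inverse power model mentioned in the abstract can be fitted by linear least squares in log-log space. The sketch below uses the two runtimes quoted above (53 min on 1 node, 3.11 min on 20 nodes) plus two hypothetical intermediate points consistent with that trend:

```python
import math

# (nodes, runtime in minutes); the 5- and 10-node points are hypothetical
data = [(1, 53.0), (5, 11.4), (10, 5.9), (20, 3.11)]

# Fit T = a * n**(-b) via least squares on log T = log a - b * log n
xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]
m = len(data)
xbar, ybar = sum(xs) / m, sum(ys) / m
b = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar + b * xbar)

predicted_40 = a * 40 ** (-b)   # extrapolated runtime on a 40-node cluster
```

A fitted exponent b slightly below 1 reflects the per-job overhead (upload, aggregation) that keeps the speed-up just short of ideal linear scaling.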

  12. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows (WW). The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
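The splitting behaviour behind the 'long history' problem is easy to see in a toy weight-window check (a schematic of standard split/roulette logic, not CCFE's adaptive method):

```python
import random

random.seed(7)

def apply_weight_window(weight, w_low, w_high):
    """Split or roulette a particle so its weight lands inside [w_low, w_high]."""
    if weight > w_high:
        n = int(weight / w_high) + 1        # split into n copies (real codes cap n)
        return [weight / n] * n             # total weight is preserved exactly
    if weight < w_low:
        survival = weight / w_low           # roulette: survive with probability w/w_low
        return [w_low] if random.random() < survival else []
    return [weight]

# A particle arriving far above the window triggers massive splitting --
# each copy must then be tracked to completion, producing a long history.
heavy = apply_weight_window(1000.0, w_low=0.5, w_high=2.0)
light = apply_weight_window(0.01, w_low=0.5, w_high=2.0)
```

When one process owns the 501 split copies of the heavy particle while every other process has finished its batch, parallel efficiency collapses, which is exactly the effect the adaptive WW adjustment is designed to suppress.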

  13. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.

    PubMed

    Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. 
Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in

  14. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  15. Forward Monte Carlo Computations of Polarized Microwave Radiation

    NASA Technical Reports Server (NTRS)

    Battaglia, A.; Kummerow, C.

    2000-01-01

Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, treat the atmosphere as horizontally homogeneous, with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.

  16. Image based Monte Carlo Modeling for Computational Phantom

    NASA Astrophysics Data System (ADS)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

Evaluating the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn) as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in the Treatment Plan System (TPS), as well as radiation exposure for the human body in radiation protection.

  17. Markov Chain Monte-Carlo Orbit Computation for Binary Asteroids

    NASA Astrophysics Data System (ADS)

    Oszkiewicz, D.; Hestroffer, D.; Pedro, David C.

    2013-11-01

We present a novel method of orbit computation for resolved binary asteroids. The method combines the Thiele-Innes-van den Bos method with a Markov chain Monte Carlo (MCMC) technique. The classical Thiele-van den Bos method has been commonly used in multiple applications before, including orbits of binary stars and asteroids; likewise, this novel method can be used for the analysis of binary stars and of other gravitationally bound binaries. The method requires a minimum of three observations (observing times and relative positions - Cartesian or polar) made at the same tangent plane - or close enough to enable a first approximation. Further, the use of the MCMC technique for statistical inversion yields the whole bundle of possible orbits, including the most probable one. In this new method, we make use of the Metropolis-Hastings algorithm to sample the parameters of the Thiele-van den Bos method, that is, the orbital period (or equivalently the double areal constant) together with three randomly selected observations from the same tangent plane. The observations are sampled within their observational errors (with an assumed distribution) and the orbital period is the only parameter that has to be tuned during the sampling procedure. We run multiple chains to ensure that the parameter phase space is well sampled and that the solutions have converged. After the sampling is completed we perform convergence diagnostics. The main advantage of the novel approach is that the orbital period does not need to be known in advance, and the entire region of possible orbital solutions is sampled, resulting in a maximum likelihood solution and confidence regions. We have tested the new method on several known binary asteroids and find good agreement with the results obtained with other methods.
The new method has been implemented into the Gaia DPAC data reduction pipeline and can be used to confirm the binary nature of a suspected system, and for deriving
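The Metropolis-Hastings core of such a sampler fits in a few lines. The sketch below replaces the Thiele-van den Bos geometry with a toy Gaussian log-likelihood for a scalar 'period', purely to illustrate the sampling mechanics; all numbers are hypothetical:

```python
import math, random

random.seed(3)

TRUE_PERIOD, SIGMA = 17.0, 0.8     # hypothetical stand-ins for the orbit fit

def log_likelihood(period):
    """Toy stand-in for the misfit of a trial orbital period to the observations."""
    return -0.5 * ((period - TRUE_PERIOD) / SIGMA) ** 2

period = 10.0                      # deliberately poor starting guess
samples = []
for step in range(20_000):
    proposal = period + random.gauss(0.0, 0.5)   # symmetric random-walk proposal
    if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(period):
        period = proposal                        # Metropolis acceptance rule
    if step >= 5_000:                            # discard burn-in
        samples.append(period)

posterior_mean = sum(samples) / len(samples)
```

Running several such chains from different starting guesses and checking that they agree is the convergence diagnostic the abstract refers to; the retained samples map out the full bundle of plausible solutions rather than a single best fit.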

  18. Monte Carlo computer simulation of sedimentation of charged hard spherocylinders.

    PubMed

    Viveros-Méndez, P X; Gil-Villegas, Alejandro; Aranda-Espinoza, S

    2014-07-28

In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e(2)/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to become more packed at each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface. PMID:25084954

  19. ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

    NASA Astrophysics Data System (ADS)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2014-06-01

The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software tool called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.

  20. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
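Step 3 of the assessment can be sketched as follows; the deposit-count distribution and the grade-tonnage lognormals are hypothetical placeholders for the elicited and historical inputs (the grade-tonnage dependencies the paper emphasizes are omitted for brevity):

```python
import random

random.seed(11)

def contained_metal_once():
    """One Monte Carlo trial: draw a deposit count, then grade-tonnage pairs."""
    n_deposits = random.choice([0, 1, 1, 2, 3])     # hypothetical elicited counts
    total = 0.0
    for _ in range(n_deposits):
        tonnage = random.lognormvariate(13.0, 1.0)  # tonnes of ore (hypothetical)
        grade = random.lognormvariate(-6.0, 0.5)    # metal mass fraction (hypothetical)
        total += tonnage * grade                    # tonnes of contained metal
    return total

trials = sorted(contained_metal_once() for _ in range(20_000))
p10, p50, p90 = (trials[int(q * len(trials))] for q in (0.10, 0.50, 0.90))
```

The output is a probability distribution of contained metal rather than a point estimate; with a 20% chance of zero deposits in this toy input, the 10th percentile is zero, which is exactly the kind of probabilistic statement the method is designed to support.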

  1. BOMAB phantom manufacturing quality assurance study using Monte Carlo computations

    SciTech Connect

    Mallett, M.W.

    1994-01-01

    Monte Carlo calculations have been performed to assess the importance of and quantify quality assurance protocols in the manufacturing of the Bottle-Manikin-Absorption (BOMAB) phantom for calibrating in vivo measurement systems. The parameters characterizing the BOMAB phantom that were examined included height, fill volume, fill material density, wall thickness, and source concentration. Transport simulation was performed for monoenergetic photon sources of 0.200, 0.662, and 1,460 MeV. A linear response was observed in the photon current exiting the exterior surface of the BOMAB phantom due to variations in these parameters. Sensitivity studies were also performed for an in vivo system in operation at the Pacific Northwest Laboratories in Richland, WA. Variations in detector current for this in vivo system are reported for changes in the BOMAB phantom parameters studied here. Physical justifications for the observed results are also discussed.

  2. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    NASA Astrophysics Data System (ADS)

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
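
    The map/reduce decomposition described here is easy to mimic in-process: each map task simulates an independent batch of photon histories from its own seed, and a reduce step merges the partial absorption tallies. A minimal sketch, with illustrative optical coefficients and isotropic scattering rather than MC321's actual model:

```python
import math
import random
from functools import reduce

def map_photon_batch(args):
    """Map task: simulate one batch of photon histories and return a local
    absorption tally binned by radius. Parameters are illustrative."""
    seed, n_photons, mu_a, mu_s, n_bins, dr = args
    rng = random.Random(seed)
    tally = [0.0] * n_bins
    albedo = mu_s / (mu_a + mu_s)
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0
        weight = 1.0
        while weight > 1e-4:                   # weight cutoff (no roulette, for brevity)
            step = -math.log(1.0 - rng.random()) / (mu_a + mu_s)
            x += ux * step
            y += uy * step
            z += uz * step
            r = math.sqrt(x * x + y * y + z * z)
            deposit = weight * (1.0 - albedo)  # fraction absorbed at this event
            tally[min(int(r / dr), n_bins - 1)] += deposit
            weight -= deposit
            cos_t = 2.0 * rng.random() - 1.0   # isotropic scattering
            phi = 2.0 * math.pi * rng.random()
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            ux = sin_t * math.cos(phi)
            uy = sin_t * math.sin(phi)
            uz = cos_t
    return tally

def reduce_tallies(a, b):
    """Reduce task: merge two partial absorption tallies."""
    return [u + v for u, v in zip(a, b)]

# Four independent "map tasks" with disjoint seeds, merged by one "reduce".
batches = [(seed, 250, 1.0, 10.0, 20, 0.05) for seed in range(4)]
total_absorbed = reduce(reduce_tallies, map(map_photon_batch, batches))
```

    Because each batch depends only on its own seed, a failed map task can simply be re-executed, which is the fault-tolerance property Hadoop provides for free.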

  3. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  4. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  5. An Overview of the NCC Spray/Monte-Carlo-PDF Computations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2000-01-01

    This paper advances the state of the art in spray computations with some of our recent contributions involving scalar Monte Carlo PDF (Probability Density Function) methods, unstructured grids, and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, with the PDF solution closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment, whereas the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we demonstrate the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.

  6. The Development and Diagnostic Evaluation of the Monte Carlo Integration Computer as a Teaching Aid.

    ERIC Educational Resources Information Center

    Wood, Dean A.

    This document outlines the operation of the Monte Carlo Integration Computer (MCIC), which is capable of simulating several types of chemical processes. Some data obtained through the MCIC simulation of physical processes are presented in graphs. After giving reasons for not using the initially contemplated summative research procedures for…

  7. CloudMC: a cloud computing application for Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Miras, H.; Jiménez, R.; Miras, C.; Gomà, C.

    2013-04-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
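
    The Amdahl's-law behaviour reported above is simple to reproduce. Note that the roughly 1% serial fraction below is fitted here to the quoted 37× figure; it is not a number stated in the abstract:

```python
def amdahl_speedup(n_instances, serial_fraction):
    """Amdahl's law: the speedup attainable on n instances when a fixed
    fraction of the run time cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_instances)

# 30 h of single-instance CPU finishing in 48.6 min on 64 instances is a
# measured speedup of about 37x; a serial fraction near 1.15% reproduces
# that figure (the fraction is fitted, not quoted in the paper).
measured = 30.0 * 60.0 / 48.6
modeled = amdahl_speedup(64, 0.0115)
```

    The model also shows why the deviation grows with instance count: as the parallel term shrinks toward zero, any growth in the serial fraction dominates the total time.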

  8. Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Yeom, Yeon Soo; Jeong, Jong Hwi; Han, Min Cheol; Kim, Chan Hyeong

    2014-06-01

    Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses and to measure computation speed; the values were then compared with those for the original polygonal surface phantom. It was found that using the tetrahedral mesh phantom significantly improved the computation speed, by factors of between 150 and 832, for all of the particles and simulated energies other than low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively).

  9. Monte Carlo radiative heat transfer simulation on a reconfigurable computer

    SciTech Connect

    Gokhale, M.; Ahrens, C. M.; Frigo, J.; Minnich, R. G.; Tripp J. L.

    2004-01-01

    Recently, the appearance of very large (3-10M gate) FPGAs with embedded arithmetic units has opened the door to the possibility of floating point computation on these devices. While previous researchers have described peak performance or kernel matrix operations, there is as yet little experience with mapping an application-specific floating point pipeline onto FPGAs. In this work, we port a supercomputer application benchmark onto Xilinx Virtex II and II Pro FPGAs and compare performance with a comparable microprocessor implementation. Our results show that this application-specific pipeline, with 12 multiply, 10 add/subtract, one divide, and two compare modules of single-precision floating point data type, achieves a speedup of 1.6x-1.7x. We analyze the trade-offs between hardware and software 'sweet spots' to characterize the algorithms that will perform well on current and future FPGA architectures.

  10. Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems

    SciTech Connect

    Martin, W.R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

  11. Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems

    NASA Astrophysics Data System (ADS)

    Martin, W. R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

  12. A Monte Carlo calibration of a whole body counter using the ICRP computational phantoms.

    PubMed

    Nilsson, Jenny; Isaksson, Mats

    2015-03-01

    A fast and versatile calibration of a whole body counter (WBC) is presented. The WBC, consisting of four large plastic scintillators, is to be used for measurements after an accident or other incident involving ionising radiation. The WBC was calibrated using Monte Carlo modelling and the ICRP computational phantoms. The Monte Carlo model of the WBC was made in GATE, v6.2 (Geant4 Application for Tomographic Emission) and MATLAB. The Monte Carlo model was verified by comparing the simulated energy spectrum and counting efficiency with the experimental energy spectrum and counting efficiency for high-energy monoenergetic gamma-emitting point sources. The simulated results were in good agreement with the experimental results, except when compared with experimental results from high dead-time (DT) measurements. The Monte Carlo calibration was made for heterogeneous source distributions of (137)Cs and (40)K, respectively, inside the ICRP computational phantoms. The source distribution was based on the biokinetic model for (137)Cs. PMID:25147249

  13. Computing the principal eigenelements of some linear operators using a branching Monte Carlo method

    SciTech Connect

    Lejay, Antoine Maire, Sylvain

    2008-12-01

    In earlier work, we developed a Monte Carlo method to compute the principal eigenvalue of linear operators, which was based on the simulation of exit times. In this paper, we generalize this approach by showing how to use a branching method to improve the efficacy of simulating large exit times for the purpose of computing eigenvalues. Furthermore, we show that this new method provides a natural estimate of the first eigenfunction of the adjoint operator. Numerical examples of this method are given for the Laplace operator and a homogeneous neutron transport operator.
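
    The exit-time idea can be demonstrated on the simplest case, the Laplace operator on an interval: the principal Dirichlet eigenvalue is the decay rate of the Brownian exit-time tail. A minimal sketch without the paper's branching refinement (the random-walk step size and fitting window are arbitrary choices):

```python
import math
import random

def principal_eigenvalue(n_walkers=20000, h=0.05, seed=7):
    """Estimate the principal Dirichlet eigenvalue of (1/2) d^2/dx^2 on
    (0, 1) from the exponential tail of Brownian exit times: the survival
    probability decays as S(t) ~ C * exp(-lambda * t). The paper's
    branching refinement for rare, large exit times is omitted here.
    Exact value: pi**2 / 2, about 4.935."""
    rng = random.Random(seed)
    dt = h * h                   # random-walk time step matching BM variance
    n_cells = round(1.0 / h)     # walk on an integer grid, absorbing at ends
    t1, t2 = 0.4, 0.8            # fitting window for the tail decay rate
    s1 = s2 = 0                  # survivor counts P(tau > t1), P(tau > t2)
    for _ in range(n_walkers):
        k, t = n_cells // 2, 0.0           # start in the middle, x = 0.5
        while 0 < k < n_cells and t < t2:
            k += 1 if rng.random() < 0.5 else -1
            t += dt
        tau = math.inf if 0 < k < n_cells else t
        s1 += tau > t1
        s2 += tau > t2
    return math.log(s1 / s2) / (t2 - t1)

lam = principal_eigenvalue()
```

    Branching enters precisely because the survivor counts above shrink exponentially with t: splitting long-lived walkers keeps the tail statistics usable.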

  14. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario in which we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve at the highest accuracy, by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and the multilevel Monte Carlo method. We also identify issues in the practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
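
    The telescoping multilevel estimator for a failure probability P(Q < q) can be sketched on a toy quantity of interest. The random integrand, level mapping, and sample counts below are invented for illustration and are not the paper's flow model:

```python
import math
import random

def qoi(omega, level):
    """Toy quantity of interest: trapezoid-rule approximation, on 2**level
    intervals, of the integral of a random integrand. It stands in for the
    sweep-efficiency functional and is entirely illustrative."""
    n = 2 ** level
    a, b = omega
    total = 0.0
    for i in range(n + 1):
        x = i / n
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(a * math.sin(math.pi * x) + b * x)
    return total / n

def mlmc_failure_probability(q_crit, levels=(2, 4, 6),
                             n_samples=(4000, 1000, 250), seed=3):
    """Telescoping estimator for P(Q < q_crit):
    E[I_L] = E[I_{l0}] + sum_l E[I_l - I_{l-1}],
    using many cheap coarse samples and few expensive fine samples."""
    rng = random.Random(seed)
    estimate = 0.0
    prev_level = None
    for level, n in zip(levels, n_samples):
        acc = 0.0
        for _ in range(n):
            omega = (rng.gauss(0.0, 0.5), rng.gauss(0.0, 0.5))
            fine = 1.0 if qoi(omega, level) < q_crit else 0.0
            if prev_level is None:
                acc += fine                      # coarsest level: plain MC
            else:
                coarse = 1.0 if qoi(omega, prev_level) < q_crit else 0.0
                acc += fine - coarse             # level-correction term
        estimate += acc / n
        prev_level = level
    return estimate

p_fail = mlmc_failure_probability(1.2)
```

    The correction terms are nonzero only for realizations whose Q lies within the discretization error of q_crit, which is exactly where the paper's selective refinement spends its extra accuracy.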

  15. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-11-15

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
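
    The core idea can be seen on a surrogate: if the error in a binned source propagates between cycles like an autoregressive process whose coefficient is the dominance ratio, that coefficient can be recovered by time-series fitting. A toy sketch (the paper fits a full ARMA(p, p-1) model to actual Monte Carlo source tallies, not a synthetic AR(1) series):

```python
import random

def estimate_dominance_ratio(rho=0.8, n_cycles=5000, sigma=0.1, seed=13):
    """Generate an AR(1) surrogate for the cycle-to-cycle source error,
    x[n+1] = rho * x[n] + noise, with known rho playing the role of the
    dominance ratio, then recover rho by least squares."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n_cycles - 1):
        x.append(rho * x[-1] + rng.gauss(0.0, sigma))
    # Least-squares AR(1) coefficient: sum(x_i * x_{i+1}) / sum(x_i^2)
    num = sum(x[i] * x[i + 1] for i in range(n_cycles - 1))
    den = sum(v * v for v in x[:-1])
    return num / den

rho_hat = estimate_dominance_ratio()
```

    A dominance ratio near 1 means slow error decay between cycles, which is why the quantity matters for judging source convergence in criticality calculations.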

  16. Monte Carlo simulations of converging laser beam propagating in turbid media with parallel computing

    NASA Astrophysics Data System (ADS)

    Wu, Di; Lu, Jun Q.; Hu, Xin H.; Zhao, S. S.

    1999-11-01

    Due to its flexibility and simplicity, the Monte Carlo method is often used to study light propagation in turbid media, where the photons are treated like classical particles being scattered and absorbed randomly based on a radiative transfer theory. However, because a large number of photons is needed to produce statistically significant results, this type of calculation requires large computing resources. To overcome this difficulty, we implemented parallel computing techniques in our Monte Carlo simulations. The algorithm is based on the fact that the classical particles are uncorrelated, so the trajectories of multiple photons can be tracked simultaneously. When a beam of focused light is incident on the medium, the incident photons are divided into groups according to the available processors on a parallel machine and the calculations are carried out in parallel. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel programs in both C and FORTRAN were developed on the massively parallel computer Cray T3E at the North Carolina Supercomputer Center and on a local PC-cluster network running UNIX/Sun Solaris. The parallel performance of our codes has been excellent on both the Cray T3E and the PC clusters. In this paper, we present results on a focused laser beam propagating through a highly scattering, diluted solution of intralipid. The dependence of the spatial distribution of light near the focal point on the concentration of the intralipid solution is studied and its significance is discussed.

  17. Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study

    SciTech Connect

    Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C.

    2011-02-15

    Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms

  18. COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.

    2015-11-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts, without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
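
    The accept/reject core that the Population Monte Carlo scheme iterates can be sketched in plain rejection form. Everything below (the Gaussian toy simulator, the mean-difference distance, the uniform prior) is an illustrative stand-in, not COSMOABC's API:

```python
import random

def simulator(theta, n, rng):
    """Forward model: n draws from N(theta, 1). A stand-in for an external
    simulator such as one built on NUMCOSMO (purely illustrative)."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def distance(data_a, data_b):
    """User-supplied distance between catalogues; here, the difference of
    sample means (an illustrative choice)."""
    return abs(sum(data_a) / len(data_a) - sum(data_b) / len(data_b))

def abc_rejection(observed, prior_draw, n_accept=100, eps=0.15, seed=0):
    """Plain rejection ABC: keep prior draws whose mock catalogues land
    within eps of the observations. COSMOABC wraps this core in a
    Population Monte Carlo loop that adaptively shrinks eps and reweights
    the accepted particles by importance sampling."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        mock = simulator(theta, len(observed), rng)
        if distance(mock, observed) < eps:
            accepted.append(theta)
    return accepted

rng_obs = random.Random(42)
observed = simulator(1.5, 100, rng_obs)    # mock "observations", true theta = 1.5
posterior = abc_rejection(observed, lambda r: r.uniform(-5.0, 5.0))
posterior_mean = sum(posterior) / len(posterior)
```

    The accepted sample approximates the posterior of theta, and no likelihood was ever evaluated, only forward simulations and distances.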

  19. Monte Carlo Computational Modeling of the Energy Dependence of Atomic Oxygen Undercutting of Protected Polymers

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo

    1998-01-01

    A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings as well as the scattering processes which occur have been optimized to replicate experimental results observed from protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings for various arrival energies was investigated. The atomic oxygen undercutting energy dependence predictions enable one to predict mass loss that would occur in low Earth orbit, based on lower energy ground laboratory atomic oxygen beam systems. Results of computational model prediction of undercut cavity size as a function of energy and defect size will be presented to provide insight into expected in-space mass loss of protected polymers with protective coating defects based on lower energy ground laboratory testing.

  20. Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao

    Monte Carlo simulation is a statistical sampling method used in studies of physical systems with properties that cannot be easily obtained analytically. The phase behavior of the Restricted Primitive Model of electrolyte solutions on the simple cubic lattice is studied using grand canonical Monte Carlo simulations and finite-size scaling techniques. The transition between disordered and ordered, NaCl-like structures is continuous and second-order at high temperatures, and discontinuous and first-order at low temperatures. The line of continuous transitions meets the line of first-order transitions at a tricritical point. A new algorithm, Random Skipping Sequential (RSS) Monte Carlo, is proposed, justified, and shown analytically to have better mobility over the phase space than the conventional Metropolis algorithm satisfying strict detailed balance. The new algorithm employs sequential updating, and yields greatly enhanced sampling statistics compared with the Metropolis algorithm with random updating. A parallel version of Markov chain theory is introduced and applied to accelerating Monte Carlo simulation via cluster computing. It is shown that sequential updating is the key to reducing the inter-processor communication or synchronization that slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time by the new method for systems of large and moderate sizes.

  1. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    NASA Astrophysics Data System (ADS)

    Ramos-Mendez, J. A.; Perl, J.; Faddegon, B.; Paganetti, H.

    2012-10-01

    In this work, the well-accepted particle-splitting technique has been adapted to proton therapy and implemented in a new Monte Carlo simulation tool (TOPAS) for modeling the gantry-mounted treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH). Gains of up to a factor of 14.5 in computational efficiency were reached, with respect to a reference simulation, in the generation of the phase space data in the cylindrically symmetric region of the nozzle. Comparisons between dose profiles in a water tank for several configurations show agreement between the simulations done with and without particle splitting within the statistical precision.
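
    The variance-reduction idea is that splitting a particle into several lower-weight copies at chosen surfaces keeps more histories alive in the rare, important region without biasing the estimate. A toy sketch for deep transmission through a purely absorbing slab (the geometry, splitting planes, and splitting factor are illustrative, not the TOPAS nozzle setup):

```python
import math
import random

def transmission_with_splitting(mu=1.0, thickness=5.0, n_source=2000,
                                planes=(2.0, 3.5), n_split=4, seed=5):
    """Estimate the transmission exp(-mu * thickness) through a purely
    absorbing 1-D slab. Each particle that crosses a splitting plane is
    replaced by n_split copies carrying 1/n_split of its weight, so more
    histories survive into the deep region."""
    rng = random.Random(seed)
    transmitted = 0.0
    stack = [(0.0, 1.0)] * n_source            # (position, statistical weight)
    while stack:
        x, w = stack.pop()
        flight = -math.log(1.0 - rng.random()) / mu
        x_new = x + flight                     # tentative absorption point
        crossed = [p for p in planes if x < p <= x_new]
        if crossed:
            p = min(crossed)
            # Stop at the first plane and restart n_split lighter copies
            # there; exponential flights are memoryless, so this is unbiased.
            stack.extend((p, w / n_split) for _ in range(n_split))
        elif x_new >= thickness:
            transmitted += w                   # escaped through the back face
        # otherwise: absorbed inside the slab, weight lost
    return transmitted / n_source

est = transmission_with_splitting()
analog = math.exp(-5.0)                        # exact answer, about 6.7e-3
```

    Without splitting, only about e^-5 of source particles would score at all; with it, many low-weight particles score, cutting the variance for the same source count.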

  2. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.

  3. Toward Real-Time Monte Carlo Simulation Using a Commercial Cloud Computing Infrastructure

    PubMed Central

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. Methods We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the Message Passing Interface (MPI), and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. Results The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 hours on a local computer can be executed in 3.3 minutes on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Conclusion Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. PMID:21841211

  4. Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review

    PubMed Central

    Paquet, Eric; Viktor, Herna L.

    2015-01-01

    Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamical observables, the conformational space must be efficiently sampled and the dynamics of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262

  6. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    SciTech Connect

    Zou Yu; Kavousanakis, Michail E.; Kevrekidis, Ioannis G.; Fox, Rodney O.

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
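The equation-free idea sketched above (run a short fine-scale burst, estimate the time derivative of a coarse moment from it, then take a projective leap without simulating the gap) can be illustrated with a toy constant-kernel coagulation model. This is an illustrative sketch, not the authors' code; the kernel value, burst length, and replica count are arbitrary choices.

```python
import random

def mc_burst(n, kernel, t_burst, rng):
    """Short Gillespie burst of constant-kernel coagulation: each event
    merges two particles, n -> n - 1, at total rate kernel*n*(n-1)/2."""
    t = 0.0
    while n > 1:
        rate = kernel * n * (n - 1) / 2.0
        dt = rng.expovariate(rate)
        if t + dt > t_burst:
            break
        t += dt
        n -= 1
    return n

def coarse_projective_step(n, kernel, t_burst, t_leap, rng, n_replicas=200):
    """Equation-free step: average several short bursts to estimate dN/dt,
    then extrapolate over the burst plus the leap (the closed moment
    equation is never written down, only evaluated via the bursts)."""
    avg_after = sum(mc_burst(n, kernel, t_burst, rng)
                    for _ in range(n_replicas)) / n_replicas
    slope = (avg_after - n) / t_burst
    return max(1, int(round(n + slope * (t_burst + t_leap))))

rng = random.Random(7)
n = 1000
# five coarse steps: burst 0.001 time units, project over 0.004 more each time
for _ in range(5):
    n = coarse_projective_step(n, kernel=0.002, t_burst=0.001, t_leap=0.004, rng=rng)
```

For a constant kernel the zeroth moment obeys dM0/dt = -(K/2) M0^2, so the projective trajectory can be checked against the analytic decay.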

  7. CASSIS — A new Monte-Carlo computer program for channeling simulation of RBS, NRA and PIXE

    NASA Astrophysics Data System (ADS)

    Kling, A.

    1995-08-01

    The basic concepts of the Monte Carlo computer simulation program CASSIS (Channeling Adapted Simulation of Swift Ions in Solids) for channeling phenomena are presented and discussed. In contrast to common computer codes, CASSIS is able to perform calculations for high foreign atom concentrations, complex noncubic crystal lattices and PIXE-channeling. The feasibility of the program is demonstrated for different materials.

  8. Efficient Approximate Bayesian Computation Coupled With Markov Chain Monte Carlo Without Likelihood

    PubMed Central

    Wegmann, Daniel; Leuenberger, Christoph; Excoffier, Laurent

    2009-01-01

    Approximate Bayesian computation (ABC) techniques permit inferences in complex demographic models, but are computationally inefficient. A Markov chain Monte Carlo (MCMC) approach has been proposed (Marjoram et al. 2003), but it suffers from computational problems and poor mixing. We propose several methodological developments to overcome the shortcomings of this MCMC approach and hence realize substantial computational advances over standard ABC. The principal idea is to relax the tolerance within MCMC to permit good mixing, but retain a good approximation to the posterior by a combination of subsampling the output and regression adjustment. We also propose to use a partial least-squares (PLS) transformation to choose informative statistics. The accuracy of our approach is examined in the case of the divergence of two populations with and without migration. In that case, our ABC–MCMC approach needs considerably less computation time than conventional ABC to reach the same accuracy. We then apply our method to a more complex case with the estimation of divergence times and migration rates between three African populations. PMID:19506307
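A minimal ABC-MCMC sketch in the spirit of Marjoram et al. (2003), as summarized above: a proposal is accepted only when a summary statistic simulated from it falls within a tolerance of the observed one, so no likelihood is ever evaluated. The Gaussian toy model, tolerance, and proposal scale are illustrative assumptions, not the paper's setup (which uses PLS-chosen statistics and regression adjustment).

```python
import random

def simulate(theta, n, rng):
    """Forward model: n draws from Normal(theta, 1); summary = sample mean."""
    return sum(rng.gauss(theta, 1.0) for _ in range(n)) / n

def abc_mcmc(observed_summary, n_obs, n_steps, tol, rng):
    """Likelihood-free MCMC: with a flat prior and a symmetric proposal,
    accept the proposal iff its simulated summary is within tol of the
    observed summary; otherwise keep the current state."""
    theta = observed_summary  # initialize where acceptance is possible
    chain = []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, 0.5)
        s = simulate(proposal, n_obs, rng)
        if abs(s - observed_summary) < tol:
            theta = proposal
        chain.append(theta)
    return chain

rng = random.Random(42)
chain = abc_mcmc(observed_summary=2.0, n_obs=50, n_steps=3000, tol=0.3, rng=rng)
posterior_mean = sum(chain[500:]) / len(chain[500:])  # discard burn-in
```

Shrinking `tol` sharpens the posterior approximation but collapses the acceptance rate, which is exactly the mixing problem the abstract says the authors relax and then correct for.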

  9. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    SciTech Connect

    Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2015-06-15

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron Gamma Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo estimates were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays, to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms, and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm^2 field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical

  10. A Monte Carlo tool for raster-scanning particle therapy dose computation

    NASA Astrophysics Data System (ADS)

    Jelen, U.; Radon, M.; Santiago, A.; Wittig, A.; Ammazzalorso, F.

    2014-03-01

    The purpose of this work was to implement Monte Carlo (MC) dose computation in realistic patient geometries with raster-scanning, the most advanced ion beam delivery technique, combining magnetic beam deflection with energy variation. FLUKA, a Monte Carlo package well established in particle therapy applications, was extended to simulate raster-scanning delivery with clinical data, a capability unavailable as a built-in feature. A new complex beam source, compatible with the FLUKA public programming interface, was implemented in Fortran to model the specific properties of raster-scanning, i.e. delivery by means of multiple spot sources with variable spatial distributions, energies and numbers of particles. The source was plugged into the MC engine through the user hook system provided by FLUKA. Additionally, routines were provided to populate the beam source with treatment plan data, stored as DICOM RTPlan or TRiP98's RST format, enabling MC recomputation of clinical plans. Finally, facilities were integrated to read computerised tomography (CT) data into FLUKA. The tool was used to recompute two representative carbon ion treatment plans, a skull base and a prostate case, prepared with analytical dose calculation (TRiP98). Selected, clinically relevant issues influencing the dose distributions were investigated: (1) the presence of positioning errors, (2) the influence of fiducial markers and (3) variations in pencil beam width. Notable differences in modelling of these challenging situations were observed between the analytical and Monte Carlo results. In conclusion, a tool was developed to support particle therapy research and treatment when high-precision MC calculations are required, e.g. in the presence of severe density heterogeneities or in quality assurance procedures.
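The spot-source behaviour described above (multiple pencil-beam spots, each with its own position, energy, and planned particle number) can be sketched as a sampler. FLUKA user routines are written in Fortran; this Python sketch and its spot list are purely illustrative, not the authors' source routine.

```python
import random

# Hypothetical raster-scan spot list: (x_mm, y_mm, energy_MeV_per_u, n_particles)
spots = [
    (-5.0, 0.0, 270.0, 2.0e6),
    ( 0.0, 0.0, 270.0, 3.0e6),
    ( 5.0, 0.0, 280.0, 1.0e6),
]

def sample_primary(spots, sigma_mm, rng):
    """Pick a spot with probability proportional to its planned particle
    number, then sample the lateral position from a Gaussian pencil beam
    centred on that spot."""
    weights = [s[3] for s in spots]
    x0, y0, energy, _ = rng.choices(spots, weights=weights, k=1)[0]
    return (rng.gauss(x0, sigma_mm), rng.gauss(y0, sigma_mm), energy)

rng = random.Random(1)
primaries = [sample_primary(spots, sigma_mm=3.0, rng=rng) for _ in range(10000)]
```

In the real tool the spot list would be populated from the DICOM RTPlan or RST file rather than hard-coded, and the MC engine would call the sampler once per primary history.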

  11. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    SciTech Connect

    Sarrut, David; Guigues, Laurent

    2008-04-15

    Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slow down the simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: One box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. The speed-up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps with removal of all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due, for example, to a large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but we could potentially use other Monte Carlo simulation codes. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gain. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy.
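The core of the segmented-volume idea, replacing per-voxel boundaries with one segment per run of identical material along the particle's path, can be illustrated in a few lines. The material names and voxel size are hypothetical, and a real implementation works on the 3D volume rather than a single ray.

```python
def merge_segments(material_ids, voxel_size):
    """Collapse consecutive voxels sharing a material into single segments,
    so a tracked particle crosses one boundary per material change instead
    of one boundary per voxel."""
    segments = []
    for mat in material_ids:
        if segments and segments[-1][0] == mat:
            # extend the current segment instead of adding a boundary
            segments[-1] = (mat, segments[-1][1] + voxel_size)
        else:
            segments.append((mat, voxel_size))
    return segments

# A ray through 1 mm voxels: air, then soft tissue, then bone, then tissue.
# 13 internal voxel boundaries collapse to 3 material boundaries.
row = ["air"] * 3 + ["tissue"] * 5 + ["bone"] * 2 + ["tissue"] * 4
print(merge_segments(row, 1.0))
# → [('air', 3.0), ('tissue', 5.0), ('bone', 2.0), ('tissue', 4.0)]
```

This also shows why the gain depends on geometry complexity: a volume where every neighbour differs in material yields no mergeable runs and no speed-up.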

  12. Approximate Bayesian Computation Using Markov Chain Monte Carlo Simulation: Theory, Concepts, and Applications

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Vrugt, J. A.

    2013-12-01

    The ever increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a much stronger and compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we will introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM, and takes advantage of a simple computational trick to resolve discontinuity problems with the application of set-theoretic summary statistics. We will also demonstrate a set of summary statistics that are rather insensitive to

  13. Scatter correction for kilovoltage cone-beam computed tomography (CBCT) images using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Jarry, G.; Graham, S. A.; Jaffray, D. A.; Moseley, D. J.; Verhaegen, F.

    2006-03-01

    In this work Monte Carlo (MC) simulations are used to correct kilovoltage (kV) cone-beam computed tomographic (CBCT) projections for scatter radiation. All images were acquired using a kV CBCT bench-top system composed of an x-ray tube, a rotation stage and a flat-panel imager. The EGSnrc MC code was used to model the system. BEAMnrc was used to model the x-ray tube while a modified version of the DOSXYZnrc program was used to transport the particles through various phantoms and score phase space files with identified scattered and primary particles. An analytical program was used to read the phase space files and produce image files. The scatter correction was implemented by subtracting the Monte Carlo-predicted scatter distribution from the measured projection images; these projection images were then reconstructed. Corrected reconstructions showed a marked improvement in image quality. Several approaches to reduce the simulation time were tested. To reduce the number of simulated scatter projections, the effect of varying the projection angle on the scatter distribution was evaluated for different geometries. It was found that the scatter distribution does not vary significantly over a 30-degree interval for the geometries tested. It was also established that increasing the size of the voxels in the voxelized phantom does not affect the scatter distribution but reduces the simulation time. Different techniques to smooth the scatter distribution were also investigated.
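The correction step itself, subtracting the MC-predicted scatter from each measured projection before the log transform used in reconstruction, reduces to a few lines. The counts below are made-up numbers for illustration, not data from the paper; the positivity floor is a common practical safeguard, not necessarily the authors' choice.

```python
import math

def scatter_corrected(measured, mc_scatter, floor=1.0):
    """Subtract the MC-predicted scatter from each detector pixel,
    clamping to a small positive floor so the subsequent logarithm
    stays defined where the subtraction overshoots."""
    return [max(m - s, floor) for m, s in zip(measured, mc_scatter)]

def line_integrals(primary, flood):
    """Convert primary counts to attenuation line integrals (Beer-Lambert)
    using the unattenuated flood-field signal."""
    return [-math.log(p / flood) for p in primary]

measured = [900.0, 700.0, 650.0, 880.0]   # one detector row (counts)
scatter  = [150.0, 140.0, 145.0, 148.0]   # MC-predicted scatter, same row
p = line_integrals(scatter_corrected(measured, scatter), flood=1000.0)
```

The corrected line integrals `p` would then feed the usual filtered backprojection; the abstract's angle-interpolation finding means `scatter` need only be simulated every ~30 degrees.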

  14. In Vivo Measurement System Calibration Using Magnetic Resonance Imaging and Monte Carlo Computations

    NASA Astrophysics Data System (ADS)

    Mallett, Michael Wesley

    1993-01-01

    A new method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations is presented. The method employs the enhanced three-point Dixon technique for producing pure fat and water images of the human body. This information is used to model the scattering media for transport calculations using the current version of the MCNP code (version 4). Development work utilizing a sample fat/water matrix compared well with laboratory measurements. Calibration of an in vivo measurement system using the BOMAB phantom, as compared with Monte Carlo modeling of this procedure, is presented as verification of the MCNP code. Verification of the integrated MRI-MCNP method is shown for a specially designed phantom composed of fat, water, air, and a bone substitute material (acrylic plastic). Implementation of the MRI-MCNP method is demonstrated for an in vivo measurement system. Failures inherent to the current method are discussed, including the inability of the imaging technique to explicitly discriminate between air and bone tissue, and the presence of mismapping errors within the pure fat/water images. Post processing techniques performed on the three-point Dixon images are demonstrated as a potential means of resolving these problems. A modified version of the MCNP code specifically for handling MRI data is also discussed.
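A sketch of the mapping step implied above: classifying each voxel from its pure-fat and pure-water Dixon signals into a transport material, with the air/bone ambiguity the abstract mentions showing up as an unresolvable signal void. The thresholds and material names are illustrative assumptions, not the author's values.

```python
def voxel_material(fat_signal, water_signal, void_level=10.0):
    """Assign a toy transport material from pure-fat and pure-water Dixon
    image intensities. Both air and bone are signal voids in MRI, so they
    cannot be distinguished here -- the limitation noted in the abstract."""
    total = fat_signal + water_signal
    if total < void_level:
        return "void"            # air or bone: ambiguous from MRI alone
    fat_fraction = fat_signal / total
    if fat_fraction > 0.8:
        return "adipose"
    if fat_fraction < 0.2:
        return "lean_tissue"
    return "mixed"               # would get a blended composition in MCNP

# one scan line of (fat, water) signals
line = [(1.0, 1.0), (95.0, 5.0), (40.0, 60.0), (3.0, 97.0)]
materials = [voxel_material(f, w) for f, w in line]
```

A full pipeline would emit an MCNP lattice with per-voxel material cards from such a classification.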

  15. The Monte Carlo Integration Computer as an Instructional Model for the Simulation of Equilibrium and Kinetic Chemical Processes: The Development and Evaluation of a Teaching Aid.

    ERIC Educational Resources Information Center

    Wood, Dean Arthur

    A special purpose digital computer which utilizes the Monte Carlo integration method of obtaining simulations of chemical processes was developed and constructed. The computer, designated as the Monte Carlo Integration Computer (MCIC), was designed as an instructional model for the illustration of kinetic and equilibrium processes, and was…

  16. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code, MCNP (Monte Carlo N-Particle), version 5. In order to reach the computational efficiency required for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations. PMID:17946330

  17. Differential Monte Carlo method for computing seismogram envelopes and their partial derivatives

    NASA Astrophysics Data System (ADS)

    Takeuchi, Nozomu

    2016-05-01

    We present an efficient method that is applicable to waveform inversions of seismogram envelopes for structural parameters describing scattering properties in the Earth. We developed a differential Monte Carlo method that can simultaneously compute synthetic envelopes and their partial derivatives with respect to structural parameters, which greatly reduces the required CPU time. Our method has no theoretical limitations to apply to the problems with anisotropic scattering in a heterogeneous background medium. The effects of S wave polarity directions and phase differences between SH and SV components are taken into account. Several numerical examples are presented to show that the intrinsic and scattering attenuation at the depth range of the asthenosphere have different impacts on the observed seismogram envelopes, thus suggesting that our method can potentially be applied to inversions for scattering properties in the deep Earth.
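The key trick, one set of random samples yielding both an expectation and its parameter derivative, can be illustrated with a score-function (likelihood-ratio) estimator on a toy exponential model. This is a generic sketch of differential Monte Carlo, not the seismogram-envelope implementation; the identity used is d/dλ E[f(X)] = E[f(X) · d/dλ log p(X; λ)].

```python
import random

def mean_and_derivative(lam, n, seed=0):
    """Score-function estimator: the same samples give both E[X] and
    d/dlam E[X] for X ~ Exponential(lam), so the derivative costs no
    extra simulation. Analytically E[X] = 1/lam and dE[X]/dlam = -1/lam^2."""
    rng = random.Random(seed)
    total = 0.0
    total_deriv = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        score = 1.0 / lam - x  # d/dlam log p(x; lam) for the exponential
        total += x
        total_deriv += x * score
    return total / n, total_deriv / n

mean, dmean = mean_and_derivative(lam=2.0, n=200000)
# analytically: E[X] = 0.5, dE[X]/dlam = -0.25
```

In a waveform inversion, `mean` plays the role of the synthetic envelope and `dmean` the partial derivative with respect to a scattering parameter, assembled into the Jacobian without rerunning the simulation.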

  18. Monte Carlo computation of nonequilibrium flow in a hypersonic iodine wind tunnel

    SciTech Connect

    Boyd, I.D.; Pham-van-Diep, G.C.; Muntz, E.P.

    1994-05-01

    The nonequilibrium flow formed by the interaction of a freejet of iodine vapor impinging on a blunt body is investigated using numerical and experimental techniques. The computational approach employs the direct simulation Monte Carlo method. The experimental measurements consist of rotational temperature obtained along the flow axis and include portions of both the freejet expansion and blunt-body shock for four different stagnation conditions. Direct comparisons of the numerical results and the experimental data are quite successful at moderate temperatures. Hence, the rotational collision time of iodine is estimated in the temperature range of 100-500 K. At higher temperatures, the agreement between simulation and measurement is less satisfactory. This demonstrates the requirement for the development of a more detailed approach to simulating rotational nonequilibrium in high-temperature flows of diatomic species. 20 refs.

  19. Organ doses for reference adult male and female undergoing computed tomography estimated by Monte Carlo simulations

    SciTech Connect

    Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E.

    2011-03-15

    Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and the tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different

  1. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    SciTech Connect

    Hendricks, J.S.; Brockhoff, R.C.; Applied Theoretical Physics Division

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.

  2. Benchmarking computations using the Monte Carlo code ritracks with data from a tissue equivalent proportional counter

    NASA Astrophysics Data System (ADS)

    Brogan, John

    Understanding the dosimetry for high-energy, heavy ions (HZE), especially within living systems, is complex and requires the use of both experimental and computational methods. Tissue-equivalent proportional counters (TEPCs) have been used experimentally to measure energy deposition in volumes similar in dimension to a mammalian cell. As these experiments begin to include a wider range of ions and energies, considerations of cost, time, and radiation protection are necessary and may limit the extent of these studies. Multiple Monte Carlo computational codes have been created to remediate this problem and serve as a mode of verification for previous experimental methods. One such code, Relativistic-Ion Tracks (RITRACKS), is currently being developed at the NASA Johnson Space Center. RITRACKS was designed to describe patterns of ionizations responsible for DNA damage on the molecular scale (nanometers). This study extends RITRACKS version 3.07 into the microdosimetric scale (microns), and compares computational results to previous experimental TEPC data. Energy deposition measurements for 1000 MeV/nucleon Fe ions in a 1 micron spherical target were compared. Different settings within RITRACKS were tested to verify their effects on dose to a target and the resulting energy deposition frequency distribution. The results were then compared to the TEPC data.
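The microdosimetric quantity behind such TEPC comparisons can be sketched: per-event energy deposits in a spherical site convert to lineal energy via the site's mean chord length, which is 2d/3 for a sphere by Cauchy's theorem. The deposit values below are made up for illustration, not RITRACKS or TEPC data.

```python
def lineal_energies(event_energies_keV, diameter_um):
    """Convert per-event energy depositions eps in a spherical site to
    lineal energy y = eps / l_bar, with mean chord length l_bar = 2d/3
    (Cauchy's theorem for convex bodies)."""
    l_bar = 2.0 * diameter_um / 3.0
    return [eps / l_bar for eps in event_energies_keV]

def frequency_mean(y_values):
    """Frequency-mean lineal energy y_F: the plain average over events."""
    return sum(y_values) / len(y_values)

# hypothetical tallied deposits (keV) in a 1-micron site
deposits = [0.8, 1.5, 2.2, 0.4, 3.1]
y = lineal_energies(deposits, diameter_um=1.0)
yF = frequency_mean(y)  # keV/um
```

Histogramming `y` over many simulated events gives the energy deposition frequency distribution that the study compares against the TEPC measurement.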

  3. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment using a Monte Carlo simulation for particle therapy using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than that of a single-threaded architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost. PMID:23877155

  4. Estimation of computed tomography dose index in cone beam computed tomography: MOSFET measurements and Monte Carlo simulations.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald

    2010-05-01

    To address the lack of accurate dose estimation method in cone beam computed tomography (CBCT), we performed point dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in the polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode, respectively. The CTDIw values from the MC simulations agreed with the MOSFET measurements to within 5%. In conclusion, a MC model for Varian CBCT has been established and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry. PMID:20386198
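The weighted index compared in this study follows the standard CTDIw definition: one-third of the central-hole measurement plus two-thirds of the mean of the peripheral-hole measurements. The point-dose values below are hypothetical, not the paper's measurements.

```python
def ctdi_w(center_dose, periphery_doses):
    """Weighted CTDI: (1/3) * CTDI_center + (2/3) * mean(CTDI_periphery),
    with doses measured at the centre hole and the peripheral holes of
    the standard PMMA phantom."""
    periphery = sum(periphery_doses) / len(periphery_doses)
    return center_dose / 3.0 + 2.0 * periphery / 3.0

# hypothetical head-phantom point doses (cGy), e.g. one centre-hole and
# four peripheral-hole readings from MOSFETs in a 16 cm PMMA phantom
print(ctdi_w(7.9, [8.6, 8.5, 8.7, 8.4]))
```

Applying the same weighting to the MOSFET readings and to the MC-scored point doses is what makes the two CTDIw values directly comparable.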

  5. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.

    PubMed

    Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system. Beyond

  6. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, a number of algorithmic shortcomings would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories on a single processor into batches for tally purposes; in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with
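
The O(√N) versus O(N) communication scaling can be illustrated with a toy model (a sketch only, not the OpenMC implementation): if each of p processors samples roughly Poisson(N/p) fission sites per generation and the bank is rebalanced by nearest-neighbor exchange, the traffic crossing each boundary is the running imbalance, whose expected magnitude grows like √N rather than N.

```python
import math
import random

def rebalance_traffic(N, p, trials=400, seed=1):
    """Average number of fission-bank sites crossing processor boundaries when
    a bank of ~N sites, sampled independently on p processors, is rebalanced
    by nearest-neighbor exchange (each processor keeps exactly N/p sites)."""
    rng = random.Random(seed)
    mu = N / p
    total = 0.0
    for _ in range(trials):
        # normal approximation to the Poisson(mu) number of sites born on
        # each processor during one fission generation
        counts = [max(0.0, rng.gauss(mu, math.sqrt(mu))) for _ in range(p)]
        surplus, traffic = 0.0, 0.0
        for c in counts[:-1]:
            surplus += c - mu        # running imbalance left of this boundary
            traffic += abs(surplus)  # sites that must cross this boundary
        total += traffic
    return total / trials

small = rebalance_traffic(10_000, 32)
large = rebalance_traffic(40_000, 32)
ratio = large / small  # ~2 for O(sqrt(N)) scaling; would be ~4 for O(N)
```

Quadrupling the bank size roughly doubles (not quadruples) the boundary traffic, which is the signature of square-root scaling.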

  7. Pediatric personalized CT-dosimetry Monte Carlo simulations, using computational phantoms

    NASA Astrophysics Data System (ADS)

    Papadimitroulas, P.; Kagadis, G. C.; Ploussi, A.; Kordolaimi, S.; Papamichail, D.; Karavasilis, E.; Syrgiamiotis, V.; Loudos, G.

    2015-09-01

    For the last 40 years, Monte Carlo (MC) simulations have served as a “gold standard” tool for a wide range of applications in the field of medical physics, and they tend to be essential in daily clinical practice. In diagnostic imaging applications such as computed tomography (CT), the assessment of deposited energy is of high interest, so as to better analyze the risks and benefits of the procedure. In recent years a large effort has been made towards personalized dosimetry, especially in pediatric applications. In the present study the GATE toolkit was used, and computational pediatric phantoms were modeled for the assessment of CT examination dosimetry. The pediatric models used come from the XCAT and IT'IS series. The X-ray spectrum of a Brightspeed CT scanner was simulated and validated with experimental data. Specifically, a DCT-10 ionization chamber was irradiated twice using 120 kVp with 100 mAs and 200 mAs, for 1 s in 1 central axial slice (thickness = 10 mm). The absorbed dose was measured in air, resulting in differences lower than 4% between the experimental and simulated data. The simulations were run with ∼10^10 primaries in order to achieve low statistical uncertainties. Dose maps were also saved for quantification of the absorbed dose in several critical organs of children during CT acquisition.

  8. Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.

    PubMed

    Edwards, Stephen; Schick, Daniel

    2016-04-01

    Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model. PMID:26910026

  9. On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo.

    PubMed

    Filippi, Sarah; Barnes, Chris P; Cornebise, Julien; Stumpf, Michael P H

    2013-03-01

    Approximate Bayesian computation (ABC) has gained popularity over the past few years for the analysis of complex models arising in population genetics, epidemiology and systems biology. Sequential Monte Carlo (SMC) approaches have become workhorses in ABC. Here we discuss how to construct the perturbation kernels that are required in ABC SMC approaches, in order to construct a sequence of distributions that start out from a suitably defined prior and converge towards the unknown posterior. We derive optimality criteria for different kernels, which are based on the Kullback-Leibler divergence between a distribution and the distribution of the perturbed particles. We will show that for many complicated posterior distributions, locally adapted kernels tend to show the best performance. We find that the added moderate cost of adapting kernel functions is easily regained in terms of the higher acceptance rate. We demonstrate the computational efficiency gains in a range of toy examples which illustrate some of the challenges faced in real-world applications of ABC, before turning to two demanding parameter inference problems in molecular biology, which highlight the huge increases in efficiency that can be gained from choice of optimal kernels. We conclude with a general discussion of the rational choice of perturbation kernels in ABC SMC settings. PMID:23502346
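
A minimal ABC-SMC loop with an adaptively scaled Gaussian perturbation kernel can be sketched as follows (a toy: the model, tolerance schedule, and the variance-doubling heuristic are illustrative choices, and the importance weights are kept flat for a uniform prior rather than reweighted by the kernel mixture as in a full implementation):

```python
import math
import random

def abc_smc(data_mean, n_particles=300, eps_schedule=(2.0, 1.0, 0.5), seed=2):
    """Toy ABC-SMC for the mean of a unit-variance Gaussian. The perturbation
    kernel is Gaussian with variance twice the (weighted) variance of the
    previous population -- a common global adaptive choice; the paper derives
    sharper, KL-based local choices."""
    rng = random.Random(seed)

    def simulate(theta):  # summary statistic: mean of 20 simulated draws
        return sum(rng.gauss(theta, 1.0) for _ in range(20)) / 20

    particles = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]  # prior
    weights = [1.0 / n_particles] * n_particles
    for eps in eps_schedule:
        mean = sum(w * t for w, t in zip(weights, particles))
        var = sum(w * (t - mean) ** 2 for w, t in zip(weights, particles))
        sigma = math.sqrt(max(2.0 * var, 1e-12))  # adaptive kernel width
        accepted = []
        while len(accepted) < n_particles:
            theta = rng.choices(particles, weights)[0]  # resample a particle
            theta_new = rng.gauss(theta, sigma)         # perturb it
            if abs(simulate(theta_new) - data_mean) < eps:
                accepted.append(theta_new)
        particles = accepted
        # flat prior: weights kept uniform here; a full ABC-SMC would reweight
        # each particle by prior(theta') / sum_j w_j K(theta' | theta_j)
        weights = [1.0 / n_particles] * n_particles
    return sum(particles) / n_particles

posterior_mean = abc_smc(1.5)
```

Shrinking the tolerance schedule while widening the kernel to the population spread is exactly the trade-off the paper's optimality criteria quantify.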

  10. Online object oriented Monte Carlo computational tool for the needs of biomedical optics

    PubMed Central

    Doronin, Alexander; Meglinski, Igor

    2011-01-01

    Conceptual engineering design and optimization of laser-based imaging techniques and optical diagnostic systems used in the field of biomedical optics requires a clear understanding of the light-tissue interaction and of the peculiarities of localization of the detected optical radiation within the medium. The description of photon migration within turbid tissue-like media is based on the concept of radiative transfer, which forms the basis of Monte Carlo (MC) modeling. The ability to directly simulate the influence of structural variations of biological tissues on the probing light makes MC a primary tool for biomedical optics and optical engineering. Due to the diversity of optical modalities, which utilize different properties of light and different mechanisms of light-tissue interaction, a new MC code typically has to be developed for each particular diagnostic application. In the current paper, introducing an object-oriented concept of MC modeling and utilizing modern web applications, we present a generalized online computational tool suitable for the major applications in biophotonics. The computation is supported by an NVIDIA CUDA graphics processing unit, providing acceleration of modeling by up to 340 times. PMID:21991540
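
The radiative-transfer core of such MC codes reduces to repeatedly sampling photon free paths from the Beer-Lambert law; a minimal sketch (CPU-only, no CUDA acceleration) of that sampling step, with an assumed tissue-like attenuation coefficient:

```python
import math
import random

def sample_free_paths(mu_t, n=100_000, seed=3):
    """Sample photon free-path lengths s = -ln(xi) / mu_t (xi uniform in (0,1]),
    the elementary step of MC photon migration in a turbid medium with total
    attenuation coefficient mu_t (here in 1/mm)."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so the logarithm is always finite
    return [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]

paths = sample_free_paths(mu_t=10.0)  # tissue-like mu_t ~ 10 /mm (assumed)
mean_path = sum(paths) / len(paths)   # should approach 1 / mu_t = 0.1 mm
```

The sample mean converging to 1/mu_t is the standard sanity check for this sampler before scattering and absorption weighting are layered on top.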

  11. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    SciTech Connect

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  12. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    SciTech Connect

    Ramos-Mendez, Jose; Perl, Joseph; Faddegon, Bruce; Schuemann, Jan; Paganetti, Harald

    2013-04-15

    Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for simulations
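
The rotational repositioning of split particles can be sketched as follows (a simplified illustration, not the TOPAS implementation; only the position is rotated here, though in practice the momentum direction must be rotated by the same azimuthal angle):

```python
import math
import random

def split_particle(x, y, z, weight, n_split=8, seed=4):
    """Split one particle into n_split copies of weight/n_split, placing each
    copy at a randomly chosen azimuthal angle about the beam (z) axis so the
    beam's cylindrical symmetry is preserved."""
    rng = random.Random(seed)
    r = math.hypot(x, y)  # radial distance from the beam axis is kept fixed
    copies = []
    for _ in range(n_split):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        copies.append((r * math.cos(phi), r * math.sin(phi), z,
                       weight / n_split))
    return copies

copies = split_particle(1.0, 2.0, 50.0, weight=1.0)
total_weight = sum(c[3] for c in copies)            # statistical weight conserved
radii = [math.hypot(c[0], c[1]) for c in copies]    # all copies at original radius
```

Dividing the weight by the splitting factor keeps all tallies unbiased; the random azimuth spreads the copies so their histories decorrelate downstream.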

  13. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    PubMed Central

    Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald

    2013-01-01

    Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for

  14. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    SciTech Connect

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)

  15. Reconstruction for proton computed tomography by tracing proton trajectories – A Monte Carlo study

    PubMed Central

    Li, Tianfang; Liang, Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W.

    2006-01-01

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the Algebraic Reconstruction Technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP (2 line pairs (lp) cm⁻¹) to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates. PMID:16878573
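
The cubic spline path (CSP) estimate can be sketched with a cubic Hermite interpolant between the measured entry and exit positions and direction slopes (a simplified one-dimensional transverse-coordinate version; the tangents are assumed to be pre-scaled by the path length):

```python
def cubic_spline_path(p_in, m_in, p_out, m_out, t):
    """Cubic Hermite estimate of a proton's transverse coordinate at depth
    fraction t in [0, 1], built from the entry/exit positions (p) and
    direction slopes (m, assumed pre-scaled by the path length)."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # standard Hermite basis polynomials
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p_in + h10 * m_in + h01 * p_out + h11 * m_out

# a proton entering at 0 mm and exiting at 2 mm with zero transverse slope
mid = cubic_spline_path(0.0, 0.0, 2.0, 0.0, 0.5)
```

The interpolant reproduces the measured entry and exit points exactly, while its curvature between them is what distinguishes the CSP from the straight-line path.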

  16. Computed tomography with a low-intensity proton flux: results of a Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Schulte, Reinhard W.; Klock, Margio C. L.; Bashkirov, Vladimir; Evseev, Ivan G.; de Assis, Joaquim T.; Yevseyeva, Olga; Lopes, Ricardo T.; Li, Tianfang; Williams, David C.; Wroe, Andrew J.; Schelin, Hugo R.

    2004-10-01

    Conformal proton radiation therapy requires accurate prediction of the Bragg peak position. This problem may be solved by using protons rather than conventional x-rays to determine the relative electron density distribution via proton computed tomography (proton CT). However, proton CT has its own limitations, which need to be carefully studied before this technique can be introduced into routine clinical practice. In this work, we have used analytical relationships as well as the Monte Carlo simulation tool GEANT4 to study the principal resolution limits of proton CT. The GEANT4 simulations were validated by comparing them to predictions of the Bethe-Bloch theory and Tschalar's theory of energy loss straggling, and were found to be in good agreement. The relationship between phantom thickness, initial energy, and the relative electron density uncertainty was systematically investigated to estimate the number of protons and the dose needed to obtain a given density resolution. The predictions of this study were verified by simulating the performance of a hypothetical proton CT scanner when imaging a cylindrical water phantom with embedded density inhomogeneities. We show that a reasonable density resolution can be achieved with a relatively small number of protons, thus providing a possible dose advantage over x-ray CT.
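
The scaling between proton count and density resolution can be illustrated with a toy calculation (assumed Gaussian straggling of relative width straggling_sigma, not the GEANT4 or Tschalar model): averaging the normalized energy loss of n protons shrinks the relative electron-density uncertainty as 1/√n.

```python
import math
import random

def density_uncertainty(n_protons, straggling_sigma=0.01, trials=1000, seed=5):
    """Spread (1 s.d.) of a relative electron-density estimate obtained by
    averaging the normalized energy loss of n_protons protons, each subject
    to Gaussian energy-loss straggling of relative width straggling_sigma."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        s = sum(1.0 + rng.gauss(0.0, straggling_sigma)
                for _ in range(n_protons))
        estimates.append(s / n_protons)  # mean normalized loss for one pixel
    mean = sum(estimates) / trials
    var = sum((x - mean) ** 2 for x in estimates) / (trials - 1)
    return math.sqrt(var)

u100 = density_uncertainty(100)
u400 = density_uncertainty(400)  # ~u100 / 2: uncertainty falls as 1/sqrt(n)
```

This square-root trade-off between proton number (and hence dose) and density resolution is the relationship the study maps out systematically.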

  17. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    SciTech Connect

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.; Satogata, Todd J.; Williams, David C.; Schulte, Reinhard W.

    2006-03-15

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.

  18. Fast photon-boundary intersection computation for Monte Carlo simulation of photon migration

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaofen; Liu, Hongyan; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-01-01

    The Monte Carlo (MC) method is generally used as a "gold standard" technique to simulate photon transport in biomedical optics. However, it is quite time-consuming, since a large number of photon propagations must be simulated in order to achieve an accurate result. In the case of complicated geometry, the computation speed is bound up with the calculation of the intersection between the photon transmission path and the media boundary. The ray-triangle-based method is often used to calculate the photon-boundary intersection in shape-based MC simulation of light propagation, but it is still relatively time-consuming. We present a fast way to determine the photon-boundary intersection. Triangle meshes are used to describe the boundary structure. A line segment instead of a ray is used to check whether a photon-boundary intersection exists, as the next location of the photon in light transport is determined by the step size. Results suggest that by simply replacing the conventional ray-triangle-based method with the proposed line segment-triangle-based method, the MC simulation of light propagation in the mouse model can be sped up by more than 35%.
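
A sketch of the segment-triangle test (standard Möller-Trumbore algebra with the intersection parameter restricted to the step length; the paper's exact formulation may differ):

```python
def segment_hits_triangle(p0, p1, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle test restricted to the segment p0->p1:
    because the photon's next position is fixed by the sampled step size,
    only intersections with 0 <= t <= 1 along the segment matter."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    d = sub(p1, p0)                   # segment direction (not normalized)
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:                  # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(p0, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if 0.0 <= t <= 1.0 else None  # None: boundary beyond this step

# photon step from the origin toward +z; triangle spanning the z = 0.5 plane
hit = segment_hits_triangle((0, 0, 0), (0, 0, 1),
                            (-1, -1, 0.5), (2, -1, 0.5), (0, 2, 0.5))
miss = segment_hits_triangle((0, 0, 0), (0, 0, 0.4),
                             (-1, -1, 0.5), (2, -1, 0.5), (0, 2, 0.5))
```

Rejecting intersections with t > 1 is the speed-up: boundaries beyond the current step need no further processing.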

  19. Monte Carlo simulation of electrothermal atomization on a desktop personal computer

    NASA Astrophysics Data System (ADS)

    Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.

    1996-07-01

    Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity of the geometry, heating, molecular interactions, etc. The intense computational time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed that can be used under Windows™ or DOS. With this program, basic parameters such as furnace dimensions, sample placement, and furnace heating, as well as kinetic parameters such as activation energies for desorption and adsorption, can be varied to show the dependence of the absorbance profile on these parameters. Even data such as the time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.

  20. Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

    SciTech Connect

    BARTEL, TIMOTHY J.; PLIMPTON, STEVEN J.; GALLIS, MICHAIL A.

    2001-10-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flowfields from free-molecular to continuum in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they undergo collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled by either a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.

  1. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555
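
The multistart steepest-descent heuristic can be sketched on a toy discrete design problem (illustrative energy function, not the molecular mechanics energy used in the paper): from each random start, apply the single-position mutation that lowers the energy most, until none improves.

```python
import random

def multistart_steepest_descent(energy, n_pos, alphabet, n_starts=50, seed=7):
    """Multistart steepest-descent heuristic over a discrete sequence space:
    from each random start, repeatedly apply the single-position mutation
    that lowers the energy most, until no mutation improves (a local
    minimum); keep the best minimum found over all starts."""
    rng = random.Random(seed)
    best_seq, best_e = None, float("inf")
    for _ in range(n_starts):
        seq = [rng.choice(alphabet) for _ in range(n_pos)]
        e = energy(seq)
        while True:
            best_trial, best_te = None, e
            for i in range(n_pos):           # scan all single mutations
                for a in alphabet:
                    if a != seq[i]:
                        trial = seq[:i] + [a] + seq[i + 1:]
                        te = energy(trial)
                        if te < best_te:
                            best_trial, best_te = trial, te
            if best_trial is None:           # no improving mutation
                break
            seq, e = best_trial, best_te     # take the steepest move
        if e < best_e:
            best_seq, best_e = seq, e
    return best_seq, best_e

def toy_energy(seq):
    """Illustrative energy: a per-residue cost plus a pairwise term; the
    global minimum (energy 0) is the all-'A' sequence."""
    single = sum(0 if a == 'A' else 1 for a in seq)
    pairs = sum(1 for a, b in zip(seq, seq[1:]) if not (a == b == 'A'))
    return single + pairs

seq, e = multistart_steepest_descent(toy_energy, 8, list("ACDE"))
```

On real design landscapes the descents end in many distinct local minima, which is why thousands of starts (and the comparison against the exact GMEC) are needed.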

  2. Organ doses for reference pediatric and adolescent patients undergoing computed tomography estimated by Monte Carlo simulation

    SciTech Connect

    Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel J.; Bolch, Wesley E.

    2012-04-15

    Purpose: To establish an organ dose database for pediatric and adolescent reference individuals undergoing computed tomography (CT) examinations by using Monte Carlo simulation. The data will permit rapid estimates of organ and effective doses for patients of different age, gender, examination type, and CT scanner model. Methods: The previously published Monte Carlo simulation model of a Siemens Sensation 16 CT scanner was employed as a base CT scanner model. A set of absorbed doses for 33 organs/tissues, normalized to the product of 100 mAs and CTDIvol (mGy/100 mAs·mGy), was established by coupling the CT scanner model with age-dependent reference pediatric hybrid phantoms. A series of single axial scans from the top of the head to the feet of the phantoms was performed at a slice thickness of 10 mm, and at tube potentials of 80, 100, and 120 kVp. Using the established CTDIvol- and 100 mAs-normalized dose matrix, organ doses for different pediatric phantoms undergoing head, chest, abdomen-pelvis, and chest-abdomen-pelvis (CAP) scans with the Siemens Sensation 16 scanner were estimated and analyzed. The results were then compared with the values obtained from three independent published methods: CT-Expo software, organ dose for abdominal CT scan derived empirically from patient abdominal circumference, and effective dose per dose-length product (DLP). Results: Organ and effective doses were calculated and normalized to 100 mAs and CTDIvol for different CT examinations. At the same technical settings, doses to organs entirely included in the CT beam coverage were 40% to 80% higher for newborn phantoms compared to those of 15-year phantoms. An increase of tube potential from 80 to 120 kVp resulted in a 2.5-2.9-fold greater brain dose for head scans. The results from this study were compared with three different published studies and/or techniques. First, organ doses were compared to those given by CT-Expo which revealed dose
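
Applying such a normalized dose matrix reduces to a simple rescaling; a sketch with a hypothetical coefficient value (not taken from the published database):

```python
def organ_dose_mGy(coeff, mAs, ctdi_vol_mGy):
    """Scale a Monte Carlo dose coefficient, normalized to the product of
    100 mAs and CTDIvol, to an actual scan setting:
        dose = coeff * (mAs / 100) * CTDIvol."""
    return coeff * (mAs / 100.0) * ctdi_vol_mGy

# hypothetical numbers: coefficient 0.08 mGy/(100 mAs * mGy), a 150 mAs scan
# at CTDIvol = 5 mGy
dose = organ_dose_mGy(0.08, 150.0, 5.0)  # 0.08 * 1.5 * 5 = 0.6 mGy
```

Normalizing by both mAs and CTDIvol is what lets one coefficient table cover different tube currents and, approximately, different scanner models.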

  3. Monte-Carlo computation of turbulent premixed methane/air ignition

    NASA Astrophysics Data System (ADS)

    Carmen, Christina Lieselotte

    The present work describes the results obtained by a time-dependent numerical technique that simulates the early flame development of a spark-ignited, premixed, lean, gaseous methane/air mixture, with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions, with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, provide excellent agreement with results obtained by other experimental and numerical researchers.

  4. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    NASA Astrophysics Data System (ADS)

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest wall tissue. The system setup uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work, and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. A major advantage of the presented system design is therefore the combination of the fixed detectors and the fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced, so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors, which would require a flat-field correction; it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  5. Monte Carlo computer simulations of Venus equilibrium and global resurfacing models

    NASA Technical Reports Server (NTRS)

    Dawson, D. D.; Strom, R. G.; Schaber, G. G.

    1992-01-01

    Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or equal to 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population, followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then, with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model, we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement: the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
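
    The kind of simulation described can be illustrated with a toy model in which the surface is the unit interval, craters arrive at Poisson-random times and positions, and each resurfacing event erases craters inside a random patch. All rates, intervals, and the one-dimensional geometry below are illustrative assumptions, not parameters of the actual study:

```python
import numpy as np

rng = np.random.default_rng(7)

def resurfacing_run(total_time=500.0, crater_rate=2.0, event_interval=50.0,
                    event_area=0.10):
    """Toy equilibrium-resurfacing model: craters form at Poisson-random times
    at uniform positions on [0, 1); each resurfacing event erases craters
    inside a random wraparound patch of fixed fractional area."""
    craters = []                                  # surviving crater positions
    t, next_event = 0.0, event_interval
    while t < total_time:
        t += rng.exponential(1.0 / crater_rate)   # next crater arrival (m.y.)
        while next_event <= min(t, total_time):   # process any events passed
            start = rng.random()
            end = start + event_area
            craters = [c for c in craters
                       if not (start <= c < end or c < end - 1.0)]  # wraparound
            next_event += event_interval
        if t < total_time:
            craters.append(rng.random())
    return craters

survivors = resurfacing_run()
```

With ~1000 craters formed and ten events each erasing 10% of the existing population, roughly 600 craters survive; comparing such counts and spatial statistics against the observed record is the essence of the test described above.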

  6. A Monte Carlo FORTRAN 200 programme for the determination of static properties of liquids vectorized to run on the CYBER 205 vector processing computer

    NASA Astrophysics Data System (ADS)

    Vogelsang, R.; Hoheisel, C.

    1987-08-01

    We present a Monte Carlo programme version written in Vector-FORTRAN 200 which allows fast computation of thermodynamic properties of dense model fluids on the CYBER 205 vector processing computer. A comparison of the execution speed of this programme, a scalar version, and a vectorized molecular dynamics programme showed the following: (i) the vectorized form of the Monte Carlo programme runs about a factor of 8 faster on the CYBER 205 than the scalar version on the conventional computer CYBER 855; (ii) for small ensembles of 32-108 particles, the Monte Carlo programme is about as fast as the molecular dynamics one. However, for larger numbers of particles, the molecular dynamics programme executes much faster on the CYBER 205 than the Monte Carlo programme, particularly when neighbour tables are used. We propose a technique to accelerate the Monte Carlo programme for larger ensembles.
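
    The core of such a programme is the Metropolis sweep. The following is a minimal sketch of that scheme, not the FORTRAN 200 code itself; the Lennard-Jones potential, particle count, and parameters are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def lj_energy(pos, i, box):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = pos - pos[i]
    d -= box * np.round(d / box)          # periodic minimum-image convention
    r2 = np.einsum("ij,ij->i", d, d)
    r2[i] = np.inf                        # exclude self-interaction
    inv6 = 1.0 / r2**3
    return np.sum(4.0 * (inv6**2 - inv6))

def metropolis_sweep(pos, box, beta, max_disp):
    """One MC sweep: trial displacement plus Metropolis acceptance per particle."""
    n = len(pos)
    accepted = 0
    for i in rng.permutation(n):
        old = pos[i].copy()
        e_old = lj_energy(pos, i, box)
        pos[i] = (old + rng.uniform(-max_disp, max_disp, 3)) % box
        e_new = lj_energy(pos, i, box)
        if rng.random() < np.exp(min(0.0, -beta * (e_new - e_old))):
            accepted += 1                 # keep the trial move
        else:
            pos[i] = old                  # reject: restore old position
    return accepted / n

box = 5.0
pos = rng.uniform(0, box, (32, 3))
rate = metropolis_sweep(pos, box, beta=1.0, max_disp=0.1)
```

The per-particle energy evaluation is the vectorizable inner loop the abstract refers to; in the NumPy sketch it is the array expression inside `lj_energy`.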

  7. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    SciTech Connect

    Ono, T; Araki, F

    2014-06-01

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating organ doses absorbed by individual patients.

  8. The use of computed tomography images in Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Bazalova, Magdalena

    Monte Carlo (MC) dose calculations cannot accurately assess the dose delivered to the patient during radiotherapy unless the patient anatomy is well known. This thesis focuses on the conversion of patient computed tomography (CT) images into MC geometry files. Metal streaking artifacts and their effect on MC dose calculations are first studied. A correction algorithm is applied to artifact-corrupted images, and dose errors due to density and tissue mis-assignment are quantified in a phantom and a patient study. The correction algorithm and MC dose calculations for various treatment beams are also investigated using phantoms with real hip prostheses. As a result of this study, we suggest that a metal artifact correction algorithm should be a part of any MC treatment planning. By means of MC simulations, scatter is shown to be a major cause of metal artifacts. The use of dual-energy CT (DECT) for a novel tissue segmentation scheme is thoroughly investigated. First, MC simulations are used to determine the optimal beam filtration for an accurate DECT material extraction. DECT is then tested on a CT scanner with a phantom, and good agreement is found in the extraction of two material properties, the relative electron density ρe and the effective atomic number Z. Compared to conventional tissue segmentation based on ρe differences, the novel tissue segmentation scheme uses differences in both ρe and Z. The phantom study demonstrates that the novel method based on ρe and Z information works well and makes MC dose calculations more accurate. This thesis also demonstrates that DECT suppresses streaking artifacts from brachytherapy seeds. Brachytherapy MC dose calculations using single-energy CT images with artifacts and DECT images with suppressed artifacts are performed, and the effect of artifact reduction is investigated. The patient and canine DECT studies also show that image noise and object motion are very important factors in DECT. A solution for reduction

  9. Development of a method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    SciTech Connect

    Mallett, M.W.; Poston, J.W.; Hickman, D.P.

    1995-06-01

    Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. 15 refs., 7 figs., 4 tabs.

  10. Monte Carlo simulations of adult and pediatric computed tomography exams: Validation studies of organ doses with physical phantoms

    SciTech Connect

    Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.

    2013-01-15

    Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or helical CT

  11. Monte Carlo simulations of adult and pediatric computed tomography exams: Validation studies of organ doses with physical phantoms

    PubMed Central

    Long, Daniel J.; Lee, Choonsik; Tien, Christopher; Fisher, Ryan; Hoerner, Matthew R.; Hintenlang, David; Bolch, Wesley E.

    2013-01-01

    Purpose: To validate the accuracy of a Monte Carlo source model of the Siemens SOMATOM Sensation 16 CT scanner using organ doses measured in physical anthropomorphic phantoms. Methods: The x-ray output of the Siemens SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code, MCNPX version 2.6. The resulting source model was able to perform various simulated axial and helical computed tomographic (CT) scans of varying scan parameters, including beam energy, filtration, pitch, and beam collimation. Two custom-built anthropomorphic phantoms were used to take dose measurements on the CT scanner: an adult male and a 9-month-old. The adult male is a physical replica of the University of Florida reference adult male hybrid computational phantom, while the 9-month-old is a replica of the University of Florida Series B 9-month-old voxel computational phantom. Each phantom underwent a series of axial and helical CT scans, during which organ doses were measured using fiber-optic coupled plastic scintillator dosimeters developed at the University of Florida. The physical setup was reproduced and simulated in MCNPX using the CT source model and the computational phantoms upon which the anthropomorphic phantoms were constructed. Average organ doses were then calculated based upon these MCNPX results. Results: For all CT scans, good agreement was seen between measured and simulated organ doses. For the adult male, the percent differences were within 16% for axial scans, and within 18% for helical scans. For the 9-month-old, the percent differences were all within 15% for both the axial and helical scans. These results are comparable to previously published validation studies using GE scanners and commercially available anthropomorphic phantoms. Conclusions: Overall results of this study show that the Monte Carlo source model can be used to accurately and reliably calculate organ doses for patients undergoing a variety of axial or

  12. A Monte Carlo investigation of cumulative dose measurements for cone beam computed tomography (CBCT) dosimetry

    NASA Astrophysics Data System (ADS)

    Abuhaimed, Abdullah; Martin, Colin J.; Sankaralingam, Marimuthu; Gentle, David J.

    2015-02-01

    Many studies have shown that the computed tomography dose index (CTDI100) which is considered as a main dose descriptor for CT dosimetry fails to provide a realistic reflection of the dose involved in cone beam computed tomography (CBCT) scans. Several practical approaches have been proposed to overcome drawbacks of the CTDI100. One of these is the cumulative dose concept. The purpose of this study was to investigate four different approaches based on the cumulative dose concept: the cumulative dose (1) f(0,150) and (2) f(0,∞) with a small ionization chamber 20 mm long, and the cumulative dose (3) f100(150) and (4) f100(∞) with a standard 100 mm pencil ionization chamber. The study also aimed to investigate the influence of using the 20 and 100 mm chambers and the standard and the infinitely long phantoms on cumulative dose measurements. Monte Carlo EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc codes were used to simulate a kV imaging system integrated with a TrueBeam linear accelerator and to calculate doses within cylindrical head and body PMMA phantoms with diameters of 16 cm and 32 cm, respectively, and lengths of 150, 600, 900 mm. f(0,150) and f100(150) approaches were studied within the standard PMMA phantoms (150 mm), while the other approaches f(0,∞) and f100(∞) were within infinitely long head (600 mm) and body (900 mm) phantoms. CTDI∞ values were used as a standard to compare the dose values for the approaches studied at the centre and periphery of the phantoms and for the weighted values. Four scanning protocols and beams of width 20-300 mm were used. It has been shown that the f(0,∞) approach gave the highest dose values which were comparable to CTDI∞ values for wide beams. The differences between the weighted dose values obtained with the 20 and 100 mm chambers were significant for the beam widths <120 mm, but these differences declined with increasing beam widths to be within 4%. The weighted dose values calculated within

  13. Kinetic Monte Carlo simulations of surface reactions on supported nanoparticles: A novel approach and computer code

    NASA Astrophysics Data System (ADS)

    Kunz, Lothar; Kuhn, Frank M.; Deutschmann, Olaf

    2015-07-01

    So far, most kinetic Monte Carlo (kMC) simulations of heterogeneously catalyzed gas-phase reactions have been limited to flat crystal surfaces. The newly developed program MoCKA (Monte Carlo Karlsruhe) combines graph-theoretical and lattice-based principles to efficiently handle multiple lattices with a large number of sites, which account for different facets of the catalytic nanoparticle and the support material, and pursues a general approach that is not restricted to a specific surface or reaction. The implementation uses the efficient variable step size method and applies a fast update algorithm for its process list. It is shown that the analysis of communication between facets and of (reverse) spillover effects is possible by rewinding the kMC simulation. Hence, this approach offers a wide range of new applications for kMC simulations in heterogeneous catalysis.
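
    The variable step size method mentioned is commonly realized as a Gillespie-type update: pick a process with probability proportional to its rate, then advance time by an exponentially distributed waiting time. A minimal sketch of that generic scheme (not MoCKA's implementation) follows:

```python
import numpy as np

rng = np.random.default_rng(5)

def kmc_step(rates, t):
    """One variable-step-size kMC step: select a process with probability
    proportional to its rate, then advance time by an exponentially
    distributed waiting time governed by the total rate."""
    total = rates.sum()
    x = rng.random() * total
    i = int(np.searchsorted(np.cumsum(rates), x, side="right"))
    t += -np.log(rng.random()) / total
    return i, t

# Two competing processes with rates 1 and 3: the second fires ~75% of the time.
rates = np.array([1.0, 3.0])
t, hits = 0.0, [0, 0]
for _ in range(2000):
    i, t = kmc_step(rates, t)
    hits[i] += 1
```

The fast process-list update the abstract mentions would replace the `np.cumsum` recomputation here, which is the O(N) bottleneck of the naive version.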

  14. PROBLEM DEPENDENT DOPPLER BROADENING OF CONTINUOUS ENERGY CROSS SECTIONS IN THE KENO MONTE CARLO COMPUTER CODE

    SciTech Connect

    Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C

    2014-01-01

    For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α, β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently does interpolation on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
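
    The interpolation described for the probability tables can be sketched as follows; this assumes the logarithm of the table value varies linearly with √T between library temperatures, which is one plausible reading of the abstract rather than the exact KENO formula:

```python
import math

def interp_prob_table(t, t1, p1, t2, p2):
    """Linear-logarithmic interpolation in the square root of temperature:
    ln(p) is taken to vary linearly with sqrt(T) between the two library
    temperatures t1 and t2 (Kelvin), with table values p1 and p2."""
    f = (math.sqrt(t) - math.sqrt(t1)) / (math.sqrt(t2) - math.sqrt(t1))
    return math.exp((1.0 - f) * math.log(p1) + f * math.log(p2))

# Illustrative values only: interpolate between 300 K and 600 K library points.
p450 = interp_prob_table(450.0, 300.0, 2.0, 600.0, 8.0)
```

By construction the scheme reproduces the library values exactly at t1 and t2 and varies monotonically in between.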

  15. Development of 1-year-old computational phantom and calculation of organ doses during CT scans using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli

    2014-09-01

    With the rapidly growing number of CT examinations, the associated radiation risk has attracted increasing attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure that the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique, and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The results show that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and that the dose database provides a method for estimating 1-year-old patient doses in a variety of CT examinations.

  16. Computational fluid dynamics / Monte Carlo simulation of dusty gas flow in a "rotor-stator" set of airfoil cascades

    NASA Astrophysics Data System (ADS)

    Tsirkunov, Yu. M.; Romanyuk, D. A.

    2016-07-01

    A dusty gas flow through two cascades of airfoils (blades), one moving and one immovable, is studied numerically. In the mathematical model of two-phase gas-particle flow, the carrier gas is treated as a continuum and described by the Navier-Stokes equations (pseudo-DNS (direct numerical simulation) approach) or the Reynolds-averaged Navier-Stokes (RANS) equations (unsteady RANS approach) with the Menter k-ω shear stress transport (SST) turbulence model. The governing equations in both cases are solved by computational fluid dynamics (CFD) methods. The dispersed phase is treated as a discrete set of solid particles whose behavior is described by the generalized kinetic Boltzmann equation. The effects of gas-particle interaction, interparticle collisions, and particle scattering in particle-blade collisions are taken into account. The direct simulation Monte Carlo (DSMC) method is used for computational simulation of the dispersed phase flow. The effects of interparticle collisions and particle scattering are discussed.

  17. A Monte-Carlo based extension of the Meteor Orbit and Trajectory Software (MOTS) for computations of orbital elements

    NASA Astrophysics Data System (ADS)

    Albin, T.; Koschny, D.; Soja, R.; Srama, R.; Poppe, B.

    2016-01-01

    The Canary Islands Long-Baseline Observatory (CILBO) is a double-station meteor camera system (Koschny et al., 2013; Koschny et al., 2014) that consists of 5 cameras. The two cameras considered in this report, ICC7 and ICC9, are installed on Tenerife and La Palma. They point to the same atmospheric volume between both islands, allowing stereoscopic observation of meteors. Since its installation in 2011 and the start of operation in 2012, CILBO has detected over 15,000 simultaneously observed meteors. Koschny and Diaz (2002) developed the Meteor Orbit and Trajectory Software (MOTS) to compute the trajectory of such meteors. The software uses the astrometric data from the detection software MetRec (Molau, 1998) and determines the trajectory in geodetic coordinates. This work presents a Monte-Carlo based extension of the MOTS code to compute the orbital elements of meteors simultaneously detected by CILBO.
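
    A Monte-Carlo extension of this kind typically perturbs the astrometric inputs with their measurement uncertainty and propagates each draw through the trajectory computation, yielding a distribution for each derived quantity. A generic sketch with a toy derived quantity (the real MOTS pipeline computes full orbital elements, and the measurement values and sigma below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def monte_carlo_spread(measured, sigma, derive, n_draws=5000):
    """Draw Gaussian-perturbed copies of the astrometric measurements and
    propagate each one through the derived-quantity computation."""
    draws = measured + rng.normal(0.0, sigma, size=(n_draws, len(measured)))
    return np.array([derive(d) for d in draws])

# Toy derived quantity: apparent angular speed (deg/s) from two position
# angles 0.04 s apart -- a stand-in for the full trajectory/orbit computation.
def ang_speed(angles):
    return (angles[1] - angles[0]) / 0.04

vals = monte_carlo_spread(np.array([10.0, 10.2]), sigma=0.01, derive=ang_speed)
mean, std = float(vals.mean()), float(vals.std())
```

The spread of the resulting distribution gives the uncertainty of the derived quantity directly, with no linearized error propagation required.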

  18. Octree indexing of DICOM images for voxel number reduction and improvement of Monte Carlo simulation computing efficiency

    SciTech Connect

    Hubert-Tremblay, Vincent; Archambault, Louis; Tubic, Dragan; Roy, Rene; Beaulieu, Luc

    2006-08-15

    The purpose of the present study is to introduce a compression algorithm for the CT (computed tomography) data used in Monte Carlo simulations. Performing simulations on CT data implies large computational costs as well as large memory requirements, since the number of voxels in such data typically reaches hundreds of millions. CT data, however, contain homogeneous regions which can be regrouped to form larger voxels without affecting the simulation's accuracy. Based on this property, we propose a compression algorithm based on octrees: in homogeneous regions the algorithm replaces groups of voxels with a smaller number of larger voxels. This reduces the number of voxels while keeping the critical high-density gradient areas. Results obtained using the present algorithm on both phantom and clinical data show that compression rates of up to 75% are possible without losing the dosimetric accuracy of the simulation.
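
    The octree merging idea can be sketched directly: recursively split the volume into octants and stop wherever a block is homogeneous within a tolerance. The cubic power-of-two volume and the tolerance below are simplifying assumptions, not details of the published algorithm:

```python
import numpy as np

def octree_compress(vol, tol, x0=0, y0=0, z0=0, size=None):
    """Recursively merge homogeneous cubic regions of a voxel volume.
    Returns a list of (x, y, z, size, mean_value) leaf nodes; assumes a
    cubic volume with a power-of-two edge length."""
    if size is None:
        size = vol.shape[0]
    block = vol[x0:x0 + size, y0:y0 + size, z0:z0 + size]
    if size == 1 or block.max() - block.min() <= tol:
        return [(x0, y0, z0, size, float(block.mean()))]  # homogeneous: one leaf
    half = size // 2
    leaves = []
    for dx in (0, half):                   # otherwise split into eight octants
        for dy in (0, half):
            for dz in (0, half):
                leaves += octree_compress(vol, tol, x0 + dx, y0 + dy, z0 + dz, half)
    return leaves

vol = np.zeros((8, 8, 8))
vol[0:2, 0:2, 0:2] = 5.0                   # one small high-density pocket
leaves = octree_compress(vol, tol=0.1)     # 512 voxels collapse to 15 leaves
```

Near the density gradient the octree keeps small voxels, while the homogeneous bulk collapses into a few large ones, which is exactly the trade-off described above.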

  19. Program for Efficient Monte Carlo Computations of Quenched SU(3) Lattice Gauge Theory Using the Quasi-heatbath Method on a CDC CYBER 205 Computer

    NASA Astrophysics Data System (ADS)

    Kennedy, A. D.; Kuti, J.; Meyer, S.; Pendleton, B. J.

    1986-05-01

    We describe the program SZINHUR, which performs a Monte Carlo measurement of properties of lattice Quantum Chromodynamics. It uses the Quasi-Heatbath updating algorithm, which is known to reduce the correlations between successive sweeps through the spacetime lattice, giving a performance improvement by a factor of roughly two over the ten-hit Metropolis procedure. The program measures the Polyakov loop and its correlation function. The program is highly vectorized and runs on a one-pipe CDC CYBER 205 at a speed of 53 μsec/link, which corresponds to an average computation rate of 93 Mflops. The program would run at almost twice this speed on a two-pipe machine.

  20. A comparison study of modal parameter confidence intervals computed using the Monte Carlo and Bootstrap techniques

    SciTech Connect

    Doebling, S.W.; Farrar, C.R.; Cornwell, P.J.

    1998-02-01

    This paper presents a comparison of two techniques used to estimate the statistical confidence intervals on modal parameters identified from measured vibration data. The first technique is Monte Carlo simulation, which involves the repeated simulation of random data sets based on the statistics of the measured data and an assumed distribution of the variability in the measured data. A standard modal identification procedure is repeatedly applied to the randomly perturbed data sets to form a statistical distribution on the identified modal parameters. The second technique is the Bootstrap approach, where individual Frequency Response Function (FRF) measurements are randomly selected with replacement to form an ensemble average. This procedure, in effect, randomly weights the various FRF measurements. These weighted averages of the FRFs are then put through the modal identification procedure. The modal parameters identified from each randomly weighted data set are then used to define a statistical distribution for these parameters. The basic difference in the two techniques is that the Monte Carlo technique requires the assumption on the form of the distribution of the variability in the measured data, while the bootstrap technique does not. Also, the Monte Carlo technique can only estimate random errors, while the bootstrap statistics represent both random and bias (systematic) variability such as that arising from changing environmental conditions. However, the bootstrap technique requires that every frequency response function be saved for each average during the data acquisition process. Neither method can account for bias introduced during the estimation of the FRFs. This study has been motivated by a program to develop vibration-based damage identification procedures.
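
    The bootstrap procedure described, resampling the measurements with replacement and re-running the identification on each resample, can be sketched generically; the estimator here is a simple mean standing in for the full modal identification step, and the surrogate data are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(samples, estimator, n_resamples=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for an arbitrary estimator:
    resample the data with replacement, re-apply the estimator, and take
    percentiles of the resulting distribution."""
    n = len(samples)
    stats = np.empty(n_resamples)
    for b in range(n_resamples):
        idx = rng.integers(0, n, n)        # draw n indices with replacement
        stats[b] = estimator(samples[idx])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

data = rng.normal(10.0, 2.0, size=200)     # surrogate for identified modal frequencies
lo, hi = bootstrap_ci(data, np.mean)
```

Because no distributional form is assumed for the data, this matches the key distinction the abstract draws between the bootstrap and the Monte Carlo approach.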

  1. TH-A-19A-10: Fast Four Dimensional Monte Carlo Dose Computations for Proton Therapy of Lung Cancer

    SciTech Connect

    Mirkovic, D; Titt, U; Mohan, R; Yepes, P

    2014-06-15

    Purpose: To develop and validate a fast and accurate four-dimensional (4D) Monte Carlo (MC) dose computation system for proton therapy of lung cancer and other thoracic and abdominal malignancies in which the delivered dose distributions can be affected by respiratory motion of the patient. Methods: A 4D computed tomography (CT) scan for a lung cancer patient treated with protons in our clinic was used to create a time-dependent patient model using our in-house, MCNPX-based Monte Carlo system (“MC²”). The beam line configurations for two passively scattered proton beams used in the actual treatment were extracted from the clinical treatment plan and a set of input files was created automatically using MC². A full MC simulation of the beam line was computed using MCNPX, and a set of phase space files for each beam was collected at the distal surface of the range compensator. The particles from these phase space files were transported through the 10 voxelized patient models corresponding to the 10 phases of the breathing cycle in the 4DCT, using MCNPX and an accelerated (fast) MC code called “FDC”, developed by us and based on the track repeating algorithm. The accuracy of the fast algorithm was assessed by comparing the two time-dependent dose distributions. Results: An error of less than 1% in 100% of the voxels in all phases of the breathing cycle was achieved with this method, with a speedup of more than 1000 times. Conclusion: The proposed method, which uses full MC to simulate the beam line and the accelerated MC code FDC for the time-consuming particle transport inside the complex, time-dependent geometry of the patient, shows excellent accuracy together with an extraordinary speed.

  2. Prediction of beam hardening artefacts in computed tomography using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Thomsen, M.; Knudsen, E. B.; Willendrup, P. K.; Bech, M.; Willner, M.; Pfeiffer, F.; Poulsen, M.; Lefmann, K.; Feidenhans'l, R.

    2015-01-01

    We show how radiological images of both single and multi material samples can be simulated using the Monte Carlo simulation tool McXtrace and how these images can be used to make a three dimensional reconstruction. Good numerical agreement between the X-ray attenuation coefficient in experimental and simulated data can be obtained, which allows us to use simulated projections in the linearisation procedure for single material samples and in that way reduce beam hardening artefacts. The simulations can be used to predict beam hardening artefacts in multi material samples with complex geometry, illustrated with an example. Linearisation requires knowledge about the X-ray transmission at varying sample thickness, but in some cases homogeneous calibration phantoms are hard to manufacture, which affects the accuracy of the calibration. Using simulated data overcomes the manufacturing problems and in that way improves the calibration.
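
    The linearisation procedure can be sketched with a two-energy toy spectrum: measure (or simulate) polychromatic attenuation versus thickness, then fit a polynomial that maps it onto the ideal linear, monochromatic attenuation before reconstruction. All beam parameters below are illustrative, not taken from the McXtrace study:

```python
import numpy as np

# Simulated calibration: polychromatic attenuation vs thickness for a toy
# two-energy beam (attenuation coefficients and weights are assumptions).
thickness = np.linspace(0, 5, 50)                       # sample thickness (cm)
mu = np.array([0.5, 0.2])                               # attenuation at two energies (1/cm)
w = np.array([0.5, 0.5])                                # spectral weights
trans = (w * np.exp(-np.outer(thickness, mu))).sum(axis=1)
a_poly = -np.log(trans)                                 # measured (nonlinear) attenuation

# Linearisation: fit a polynomial mapping the measured attenuation onto the
# ideal linear attenuation of an effective monochromatic beam.
mu_eff = (w * mu).sum()                                 # target monochromatic coefficient
a_lin = mu_eff * thickness
coeffs = np.polyfit(a_poly, a_lin, deg=4)

def linearise(a):
    """Correct a polychromatic attenuation value before backprojection."""
    return np.polyval(coeffs, a)
```

Using simulated rather than measured calibration data, as the abstract proposes, simply means `trans` comes from the Monte Carlo model instead of a physical phantom of varying thickness.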

  3. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes.

    PubMed

    Khajeh, Masoud; Safigholi, Habib

    2016-03-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. A second optimization was done on the selection of the anode shape based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to bring the dose anisotropy function closer to unity at any angle from 0° to 170°. Three anode shapes, cylindrical, spherical, and conical, were considered. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a computational fluid dynamics (CFD) code. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  4. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. A second optimization was done on the selection of the anode shape based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to bring the dose anisotropy function closer to unity at any angle from 0° to 170°. Three anode shapes, cylindrical, spherical, and conical, were considered. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a computational fluid dynamics (CFD) code. The characterization criteria of the CFD were the minimum temperature on the anode shape, cooling water, and pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  5. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations

    SciTech Connect

    Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; Wagter, Carlos de; Gersem, Werner de; Neve, Wilfried de; Thierens, Hubert

    2006-09-15

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences

  6. Accuracy of patient dose calculation for lung IMRT: A comparison of Monte Carlo, convolution/superposition, and pencil beam computations.

    PubMed

    Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert

    2006-09-01

    The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both

  7. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2014-11-01

    This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that simulates acquisition errors to the geometry; (2) performed an error propagation analysis on volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as in estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most of 3D GIS operations, supporting related research efforts in the future.
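
The Monte Carlo step described here is compact enough to sketch: perturb each dimension of a box-shaped building with Gaussian positional noise and record the spread of the recomputed volumes. The dimensions and error magnitude below are illustrative, not values from the paper.

```python
import random

def mc_volume_error(w, d, h, sigma, n=20000, seed=1):
    """Propagate Gaussian positional error (std dev sigma, metres) in the
    dimensions of a box-shaped building to its computed volume.
    Returns (mean volume, standard deviation of the volume)."""
    rng = random.Random(seed)
    vols = [(w + rng.gauss(0, sigma)) *
            (d + rng.gauss(0, sigma)) *
            (h + rng.gauss(0, sigma)) for _ in range(n)]
    mean = sum(vols) / n
    sd = (sum((v - mean) ** 2 for v in vols) / (n - 1)) ** 0.5
    return mean, sd

mean, sd = mc_volume_error(10.0, 8.0, 6.0, sigma=0.2)
```

Even a 0.2 m positional error on a 10 × 8 × 6 m building spreads the 480 m³ volume by several percent, mirroring the paper's finding that small geometric errors cause significant volume discrepancies.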

  8. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    PubMed

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry. PMID:26205316
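
The reconstruction chain named above (filtered back-projection with a Hann filter) can be sketched as follows; this is a generic parallel-beam implementation with nearest-neighbour interpolation rather than the spline interpolation used in the study, applied to an analytic disk phantom rather than simulated detector fluence.

```python
import numpy as np

def fbp_hann(sinogram, angles_deg):
    """Parallel-beam filtered back-projection with a Hann-windowed ramp filter.
    sinogram: array of shape (num_angles, num_detectors)."""
    n_ang, n_det = sinogram.shape
    f = np.fft.fftfreq(n_det)                             # cycles/sample
    filt = np.abs(f) * 0.5 * (1 + np.cos(2 * np.pi * f))  # ramp * Hann window
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))
    x = np.arange(n_det) - n_det // 2
    X, Y = np.meshgrid(x, x)
    image = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Nearest-neighbour backprojection along each view
        t = (X * np.cos(theta) + Y * np.sin(theta)).round().astype(int) + n_det // 2
        image += proj[np.clip(t, 0, n_det - 1)]
    return image * np.pi / n_ang                          # angular step in radians

# Analytic sinogram of a centred disk (radius 12, unit density, 64 detectors):
# every view of a centred disk is the same chord-length profile.
s = np.arange(64) - 32
proj = 2 * np.sqrt(np.clip(12.0 ** 2 - s ** 2, 0, None))
angles = np.arange(0, 180, 3)
img = fbp_hann(np.tile(proj, (len(angles), 1)), angles)
```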

  9. Monte Carlo computer simulations and electron microscopy of colloidal cluster formation via emulsion droplet evaporation

    NASA Astrophysics Data System (ADS)

    Schwarz, Ingmar; Fortini, Andrea; Wagner, Claudia Simone; Wittemann, Alexander; Schmidt, Matthias

    2011-12-01

    We consider a theoretical model for a binary mixture of colloidal particles and spherical emulsion droplets. The hard sphere colloids interact via additional short-ranged attraction and long-ranged repulsion. The droplet-colloid interaction is an attractive well at the droplet surface, which induces the Pickering effect. The droplet-droplet interaction is a hard-core interaction. The droplets shrink in time, which models the evaporation of the dispersed (oil) phase, and we use Monte Carlo simulations for the dynamics. In the experiments, polystyrene particles were assembled using toluene droplets as templates. The arrangement of the particles on the surface of the droplets was analyzed with cryogenic field emission scanning electron microscopy. Before evaporation of the oil, the particle distribution on the droplet surface was found to be disordered in experiments, and the simulations reproduce this effect. After complete evaporation, ordered colloidal clusters are formed that are stable against thermal fluctuations. Both in the simulations and with field emission scanning electron microscopy, we find stable packings that range from doublets, triplets, and tetrahedra to complex polyhedra of colloids. The simulated cluster structures and size distribution agree well with the experimental results. We also simulate hierarchical assembly in a mixture of tetrahedral clusters and droplets, and find supercluster structures with morphologies that are more complex than those of clusters of single particles.

  10. Current Status on the use of Parallel Computing in Turbulent Reacting Flow Computations Involving Sprays, Monte Carlo PDF and Unstructured Grids. Chapter 4

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of the coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. The gas-turbine combustor flows are often characterized by a complex interaction between various physical processes associated with the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, radiative heat transfer associated with highly absorbing and radiating species, among others. The rate controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid phase evaporation in many practical combustion devices.

  11. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
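
The abstract does not detail the technique, so as a generic illustration of forward Monte Carlo variance reduction, here is an importance-sampling sketch for a rare event: estimating P(X > 3) for a standard normal by sampling from a distribution shifted toward the tail and reweighting. All names and parameters are illustrative.

```python
import math
import random

def naive_tail(n, rng):
    """Plain Monte Carlo estimate of P(X > 3), X ~ N(0, 1)."""
    return sum(rng.gauss(0, 1) > 3.0 for _ in range(n)) / n

def importance_tail(n, rng, shift=3.0):
    """Importance sampling: draw from N(shift, 1) and reweight each tail hit
    by the likelihood ratio phi(x) / phi(x - shift) = exp(-shift*x + shift^2/2)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1)
        if x > 3.0:
            total += math.exp(-shift * x + shift * shift / 2)
    return total / n

est = importance_tail(20000, random.Random(7))  # true value is about 0.00135
```

With the same sample budget, the naive estimator sees only a handful of tail hits, while the shifted sampler hits the tail about half the time, cutting the variance by orders of magnitude.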

  12. An Educational MONTE CARLO Simulation/Animation Program for the Cosmic Rays Muons and a Prototype Computer-Driven Hardware Display.

    ERIC Educational Resources Information Center

    Kalkanis, G.; Sarris, M. M.

    1999-01-01

    Describes an educational software program for the study of and detection methods for the cosmic ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…

  13. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    SciTech Connect

    Wang, Z; Gao, M

    2014-06-01

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created with an on-demand instance and worker nodes were created with spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.

  14. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  15. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
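
Several of the fundamentals listed (random sampling, free-flight distances, collision physics, tallies) fit in a short analog simulation; the 1-D slab model below is a textbook-style sketch, not code from RACER.

```python
import math
import random

def slab_transmission(sigma_t, sigma_s, thickness, n, seed=2):
    """Analog Monte Carlo transmission through a 1-D slab: sample exponential
    free flights, scatter isotropically with probability sigma_s/sigma_t,
    absorb otherwise; tally particles crossing the far face."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                       # born on the left face, moving right
        while True:
            # Free-flight distance along the flight direction
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= thickness:
                transmitted += 1
                break
            if x <= 0.0:
                break                           # leaked back out of the near face
            if rng.random() < sigma_s / sigma_t:
                mu = 2.0 * rng.random() - 1.0   # isotropic scattering cosine
            else:
                break                           # absorbed
    return transmitted / n

t_absorber = slab_transmission(1.0, 0.0, 2.0, 50000)   # pure absorber: exp(-2)
t_scatterer = slab_transmission(1.0, 0.9, 2.0, 20000)  # mostly scattering medium
```

For the pure absorber the tally reproduces the analytic uncollided transmission exp(-sigma_t * thickness), which is the standard first check on such a sampler.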

  16. Reconfigurable computing for Monte Carlo simulations: Results and prospects of the Janus project

    NASA Astrophysics Data System (ADS)

    Baity-Jesi, M.; Baños, R. A.; Cruz, A.; Fernandez, L. A.; Gil-Narvion, J. M.; Gordillo-Guerrero, A.; Guidetti, M.; Iñiguez, D.; Maiorano, A.; Mantovani, F.; Marinari, E.; Martin-Mayor, V.; Monforte-Garcia, J.; Muñoz Sudupe, A.; Navarro, D.; Parisi, G.; Pivanti, M.; Perez-Gaviro, S.; Ricci-Tersenghi, F.; Ruiz-Lorenzo, J. J.; Schifano, S. F.; Seoane, B.; Tarancon, A.; Tellez, P.; Tripiccione, R.; Yllanes, D.

    2012-08-01

    We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding about the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.

  17. Approximate Bayesian Computation using Markov Chain Monte Carlo simulation: DREAM(ABC)

    NASA Astrophysics Data System (ADS)

    Sadegh, Mojtaba; Vrugt, Jasper A.

    2014-08-01

    The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh (2013) to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC) and relaxes the need for a residual-based likelihood function in favor of one or multiple different summary statistics that exhibit superior diagnostic power. Here we propose several methodological improvements over commonly used ABC sampling methods to permit inference of complex system models. Our methodology entitled DREAM(ABC) uses the DiffeRential Evolution Adaptive Metropolis algorithm as its main building block and takes advantage of a continuous fitness function to efficiently explore the behavioral model space. Three case studies demonstrate that DREAM(ABC) is at least an order of magnitude more efficient than commonly used ABC sampling methods for more complex models. DREAM(ABC) is also more amenable to distributed, multi-processor implementation, a prerequisite to diagnostic inference of CPU-intensive system models.
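
DREAM(ABC) layers adaptive Markov chain sampling on top of the basic ABC idea; that underlying idea, accepting a parameter draw when a summary statistic of the simulated data falls within a tolerance of the observed one, can be sketched as a plain rejection sampler. The Gaussian toy model is illustrative only.

```python
import random

def abc_rejection(obs_summary, prior_draw, simulate_summary, eps, n_draws, rng):
    """Basic ABC rejection sampling: keep parameter draws whose simulated
    summary statistic lands within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate_summary(theta, rng) - obs_summary) < eps:
            accepted.append(theta)
    return accepted

# Toy model: infer the mean of N(theta, 1) data from a 50-point sample mean.
def prior_draw(rng):
    return rng.uniform(-5.0, 5.0)

def sim_mean(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50

posterior = abc_rejection(1.0, prior_draw, sim_mean,
                          eps=0.1, n_draws=20000, rng=random.Random(3))
```

The low acceptance rate of this naive sampler is exactly the inefficiency that adaptive schemes such as DREAM(ABC) are designed to overcome.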

  18. SU-E-T-584: Commissioning of the MC2 Monte Carlo Dose Computation Engine

    SciTech Connect

    Titt, U; Mirkovic, D; Liu, A; Ciangaru, G; Mohan, R; Anand, A; Perles, L

    2014-06-01

    Purpose: An automated system, MC2, was developed to convert DICOM proton therapy treatment plans into a sequence of MCNPX input files, and submit these to a computing cluster. MC2 converts the results into DICOM format, and any treatment planning system can import the data for comparison vs. conventional dose predictions. This work describes the data and the efforts made to validate the MC2 system against measured dose profiles and how the system was calibrated to predict the correct number of monitor units (MUs) to deliver the prescribed dose. Methods: A set of simulated lateral and longitudinal profiles was compared to data measured for commissioning purposes and during annual quality assurance efforts. Acceptance criteria were relative dose differences smaller than 3% and differences in range (in water) of less than 2 mm. For two out of three double scattering beam lines validation results were already published. Spot checks were performed to assure proper performance. For the small snout, all available measurements were used for validation vs. simulated data. To calibrate the dose per MU, the energy deposition per source proton at the center of the spread out Bragg peaks (SOBPs) was recorded for a set of SOBPs from each option. These were subsequently scaled to the results of dose per MU determination based on published methods. The simulations of the doses in the magnetically scanned beam line were also validated vs. measured longitudinal and lateral profiles. The source parameters were fine tuned to achieve maximum agreement with measured data. The dosimetric calibration was performed by scoring energy deposition per proton, and scaling the results to a standard dose measurement of a 10 × 10 × 10 cm³ volume irradiation using 100 MU. Results: All simulated data passed the acceptance criteria. Conclusion: MC2 is fully validated and ready for clinical application.
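
The acceptance criteria quoted in the Methods (relative dose difference below 3%, range-in-water difference below 2 mm) amount to a one-line check; the helper below is a hypothetical illustration, not part of MC2.

```python
def passes_commissioning(dose_sim, dose_meas, range_sim_mm, range_meas_mm):
    """Return True when a simulated profile meets the stated acceptance
    criteria: relative dose difference < 3% and range difference < 2 mm."""
    relative_dose_diff = abs(dose_sim - dose_meas) / abs(dose_meas)
    return relative_dose_diff < 0.03 and abs(range_sim_mm - range_meas_mm) < 2.0
```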

  19. Advanced computational methods for nodal diffusion, Monte Carlo, and SN problems. Progress report, January 1, 1992–March 31, 1993

    SciTech Connect

    Martin, W.R.

    1993-01-01

    This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
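
For context on item (2), the conventional Russian roulette against which the exponential-transform results are compared terminates low-weight histories without introducing bias; a minimal sketch:

```python
import random

def russian_roulette(weight, threshold, survival_weight, rng):
    """Unbiased termination of low-weight histories: a particle below the
    threshold survives with probability weight/survival_weight, and the
    survivor's weight is raised so the expected weight is unchanged."""
    if weight >= threshold:
        return weight
    if rng.random() < weight / survival_weight:
        return survival_weight
    return 0.0   # history killed

rng = random.Random(5)
kept = [russian_roulette(0.1, 0.5, 1.0, rng) for _ in range(100000)]
```

Averaged over many histories the surviving weight equals the input weight (here 0.1), which is the unbiasedness property that makes the game safe to play.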

  20. The application of Monte Carlo simulation to the design of collimators for single photon emission computed tomography

    NASA Astrophysics Data System (ADS)

    Cullum, Ian Derek

    Single photon emission computed tomography offers the potential for quantification of the uptake of radiopharmaceuticals in-vivo. This thesis investigates some of the factors which limit the accuracy of these methods for measurements in the human brain and investigates how the errors can be reduced. Modifications to data collection devices rather than image reconstruction techniques are studied. To assess the impact of errors on images, a set of computer generated test objects was developed. These included standard Anger and Phelps phantoms and a series of slices of the human brain taken from an atlas of transmission tomography. System design involves a balance between resolution and noise in the image. The optimal resolution depends on the data collection system, the uptake characteristics of the radiopharmaceutical and object size. A method to determine this resolution was developed and showed a single-slice system employing focused probe detectors to offer greater potential for quantification in the brain than systems based on multiple Anger gamma cameras. A collimation system must be designed to achieve the required resolution. Classical geometric design is not satisfactory in the presence of scattering materials. For this reason a Monte Carlo simulation allowing flexible choice of collimator parameters and source distribution was developed. The simulation was fully tested and then used to predict the performance of collimators for probe and camera based systems. These assessments were carried out for the 'worst case source' which was a concept developed and validated to allow faster prediction of collimator performance. In essence the geometry of this source is such as to allow a resolution measurement to be made which represents the worst value expected from the system. The effect of changes in collimation on image quality was assessed using the computer phantoms and simulation of the data acquisition process on the single-slice system. These data were

  1. Spectral computed tomography for quantitative decomposition of vulnerable plaques using a dual-energy technique: a Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Jo, B. D.; Park, S.-J.; Kim, H. M.; Kim, D. H.; Kim, H.-J.

    2016-02-01

    A spectral computed tomography (CT) system based on an energy-resolved photon-counting Cadmium Zinc Telluride (CZT) detector with a dual energy technique can provide spectral information and can possibly distinguish between two or more materials with a single X-ray exposure using energy thresholds. This work provides the potential for three-material decomposition of vulnerable plaques using two inverse fitting functions. Additionally, there exists the possibility of using gold nanoparticles as a contrast agent for the spectral CT system in conjunction with a CZT photon-counting detector. In this simulation study, we used fan beam CT geometry that consisted of a 90 kVp X-ray spectrum and performed calculations by using the SpekCal program (REAL Software, Inc.) with Monte Carlo simulations. A basic test phantom was imaged with the spectral CT system for the calibration and decomposition process. This phantom contained three different materials, including lipid, iodine and gold nanoparticles, with six holes 3 mm in diameter. To exclude pile-up and charge-sharing effects, the photon-counting detector was considered an ideal detector. Then, the accuracy of the material decomposition technique with two inverse fitting functions was evaluated between decomposed images and reference images in terms of root mean square error (RMSE). The results showed that decomposed images had a good volumetric fraction for each material, and the RMSE between the measured and true volumes of lipid, iodine and gold nanoparticle fractions varied from 12.51% to 1.29% for inverse fitting functions. The study indicated that spectral CT with a CZT photon-counting detector in conjunction with a dual energy technique can be used to identify materials and may be a promising modality for quantifying material properties of vulnerable plaques.
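
At its core, a two-material decomposition from two energy bins is a 2×2 linear system: each measured log-attenuation is a weighted sum of basis-material thicknesses. A sketch with hypothetical attenuation coefficients (the study's inverse fitting functions generalize this to three materials):

```python
def decompose_dual_energy(mu_lo, mu_hi, A_lo, A_hi):
    """Solve [A_lo, A_hi] = [[mu1_lo, mu2_lo], [mu1_hi, mu2_hi]] @ [t1, t2]
    for the two basis-material thicknesses t1, t2 via Cramer's rule.
    mu_lo = (mu1_lo, mu2_lo), mu_hi = (mu1_hi, mu2_hi)."""
    (m1l, m2l), (m1h, m2h) = mu_lo, mu_hi
    det = m1l * m2h - m2l * m1h
    t1 = (A_lo * m2h - m2l * A_hi) / det
    t2 = (m1l * A_hi - A_lo * m1h) / det
    return t1, t2

# Round trip with hypothetical coefficients (lipid-like and iodine-like bases):
mu_lo, mu_hi = (0.20, 3.00), (0.18, 1.20)
t1_true, t2_true = 2.0, 0.5
A_lo = mu_lo[0] * t1_true + mu_lo[1] * t2_true
A_hi = mu_hi[0] * t1_true + mu_hi[1] * t2_true
t1, t2 = decompose_dual_energy(mu_lo, mu_hi, A_lo, A_hi)
```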

  2. Evaluation of radiation dose to organs during kilovoltage cone-beam computed tomography using Monte Carlo simulation.

    PubMed

    Son, Kihong; Cho, Seungryong; Kim, Jin Sung; Han, Youngyih; Ju, Sang Gyu; Choi, Doo Ho

    2014-01-01

    Image-guided techniques for radiation therapy have improved the precision of radiation delivery by sparing normal tissues. Cone-beam computed tomography (CBCT) has emerged as a key technique for patient positioning and target localization in radiotherapy. Here, we investigated the imaging radiation dose delivered to radiosensitive organs of a patient during CBCT scan. The 4D extended cardiac-torso (XCAT) phantom and Geant4 Application for Tomographic Emission (GATE) Monte Carlo (MC) simulation tool were used for the study. A computed tomography dose index (CTDI) standard polymethyl methacrylate (PMMA) phantom was used to validate the MC-based dosimetric evaluation. We implemented an MC model of a clinical on-board imager integrated with the Trilogy accelerator. The MC model's accuracy was validated by comparing its weighted CTDI (CTDIw) values with those of previous studies, which revealed good agreement. We calculated the absorbed doses of various human organs at different treatment sites such as the head-and-neck, chest, abdomen, and pelvis regions, in both standard CBCT scan mode (125 kVp, 80 mA, and 25 ms) and low-dose scan mode (125 kVp, 40 mA, and 10 ms). In the former mode, the average absorbed doses of the organs in the head and neck and chest regions ranged from 4.09 to 8.28 cGy, whereas those of the organs in the abdomen and pelvis regions were 4.30-7.48 cGy. In the latter mode, the absorbed doses of the organs in the head and neck and chest regions ranged from 1.61 to 1.89 cGy, whereas those of the organs in the abdomen and pelvis region ranged from 0.79 to 1.85 cGy. The reduction in the radiation dose in the low-dose mode compared to the standard mode was about 20%, which is in good agreement with previous reports. We opine that the findings of this study would significantly facilitate decisions regarding the administration of extra imaging doses to radiosensitive organs. PMID:24710444
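
The CTDIw used to validate the MC model is a fixed-weight combination of the central and peripheral chamber readings in the PMMA phantom; a minimal sketch with illustrative readings:

```python
def ctdi_weighted(center_mGy, periphery_mGy):
    """Weighted CTDI: one third of the central PMMA-phantom reading plus two
    thirds of the mean of the peripheral readings (all in mGy)."""
    return center_mGy / 3.0 + 2.0 * sum(periphery_mGy) / (3.0 * len(periphery_mGy))
```

For example, a 10 mGy central reading with four 12 mGy peripheral readings gives a CTDIw of about 11.3 mGy.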

  3. Monte Carlo derivation of filtered tungsten anode X-ray spectra for dose computation in digital mammography*

    PubMed Central

    Paixão, Lucas; Oliveira, Bruno Beraldo; Viloria, Carolina; de Oliveira, Marcio Alves; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2015-01-01

    Objective Derive filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods Filtered spectra for rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layer (HVL) of simulated filtered spectra were compared with those obtained experimentally with a solid state detector Unfors model 8202031-H Xi R/F & MAM Detector Platinum and 8201023-C Xi Base unit Platinum Plus w mAs in a Hologic Selenia Dimensions system using a direct radiography mode. Results Calculated HVL values showed good agreement as compared with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated HVL values and experimental HVL values was 4%. Conclusion The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography. PMID:26811553
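
Once a filtered spectrum is available, the HVL comparison reduces to finding the attenuator thickness that halves the transmitted intensity. The bisection sketch below weights transmission by fluence only, ignoring the energy-to-kerma conversion a full HVL calculation would include; the two-bin spectrum is illustrative.

```python
import math

def half_value_layer(spectrum, tol=1e-6):
    """Thickness of attenuator that halves the transmitted fluence of a
    polyenergetic beam, found by bisection.
    spectrum: list of (fluence_weight, mu_per_mm) pairs."""
    def transmitted(t_mm):
        return sum(w * math.exp(-mu * t_mm) for w, mu in spectrum)
    target = 0.5 * transmitted(0.0)
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmitted(mid) > target:
            lo = mid            # still more than half transmitted: go thicker
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a monoenergetic beam this recovers ln(2)/mu exactly; for a polyenergetic beam the HVL lies between the monoenergetic values, reflecting beam hardening.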

  4. Evaluation of the effect of patient dose from cone beam computed tomography on prostate IMRT using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.; Islam, Mohammad K.; Norrlinger, Bernhard D.; Jaffray, David A.

    2008-01-15

    The aim of this study is to evaluate the impact of the patient dose due to the kilovoltage cone beam computed tomography (kV-CBCT) in a prostate intensity-modulated radiation therapy (IMRT). The dose distributions for the five prostate IMRTs were calculated using the Pinnacle3 treatment planning system. To calculate the patient dose from CBCT, phase-space beams of a CBCT head based on the ELEKTA x-ray volume imaging system were generated using the Monte Carlo BEAMnrc code for 100, 120, 130, and 140 kVp energies. An in-house graphical user interface called DOSCTP (DOSXYZnrc-based) developed using MATLAB was used to calculate the dose distributions due to a 360 deg. photon arc from the CBCT beam with the same patient CT image sets as used in Pinnacle3. The two calculated dose distributions were added together by setting the CBCT doses equal to 1%, 1.5%, 2%, and 2.5% of the prescription dose of the prostate IMRT. The prostate plan and the summed dose distributions were then processed in the CERR platform to determine the dose-volume histograms (DVHs) of the regions of interest. Moreover, dose profiles along the x- and y-axes crossing the isocenter with and without addition of the CBCT dose were determined. It was found that the added doses due to CBCT are most significant at the femur heads. Higher doses were found at the bones for a relatively low energy CBCT beam such as 100 kVp. Apart from the bones, the CBCT dose was observed to be most concentrated on the anterior and posterior side of the patient anatomy. Analysis of the DVHs for the prostate and other critical tissues showed that they vary only slightly with the added CBCT dose at different beam energies. On the other hand, the changes of the DVHs for the femur heads due to the CBCT dose and beam energy were more significant than those of rectal and bladder wall. By analyzing the vertical and horizontal dose profiles crossing the femur heads and isocenter, with and without the CBCT dose equal to 2% of the

  5. Computer simulation of supersonic rarefied gas flow in the transition region, about a spherical probe; a Monte Carlo approach with application to rocket-borne ion probe experiments

    NASA Technical Reports Server (NTRS)

    Horton, B. E.; Bowhill, S. A.

    1971-01-01

    This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.

  6. Comparison of the Results of MISSE 6 Atomic Oxygen Erosion Yields of Layered Kapton H Films with Monte Carlo Computational Predictions

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Groh, Kim De; Kneubel, Christian A.

    2014-01-01

    A space experiment flown as part of the Materials International Space Station Experiment 6B (MISSE 6B) was designed to compare the atomic oxygen erosion yield (Ey) of layers of Kapton H polyimide with no spacers between layers with that of layers of Kapton H with spacers between layers. The results were compared to a solid Kapton H (DuPont, Wilmington, DE) sample. Monte Carlo computational modeling was performed to optimize atomic oxygen interaction parameter values to match the results of both the MISSE 6B multilayer experiment and the undercut erosion profile from a crack defect in an aluminized Kapton H sample flown on the Long Duration Exposure Facility (LDEF). The Monte Carlo modeling produced credible agreement with space results of increased Ey for all samples with spacers as well as predicting the space-observed enhancement in erosion near the edges of samples due to scattering from the beveled edges of the sample holders.

  7. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
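The importance-sampling idea mentioned in this abstract (drawing samples from a biased distribution and correcting with weights) can be illustrated generically; this is the textbook estimator, not FASTER's sampling scheme, and the trivial uniform check case is purely illustrative.

```python
import random

def importance_sample_mean(f, p_pdf, q_pdf, q_sampler, n, seed=0):
    """Estimate E_p[f(X)] by drawing from q and weighting each sample by p/q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = q_sampler(rng)                     # sample from the biased density q
        total += f(x) * p_pdf(x) / q_pdf(x)    # unbias with the likelihood ratio
    return total / n

# Trivial check case: p = q = U(0,1), so weights are 1 and E[x] = 0.5
est = importance_sample_mean(lambda x: x,
                             lambda x: 1.0, lambda x: 1.0,
                             lambda rng: rng.random(), 20000)
```

In shielding codes the payoff comes from choosing q to push particles toward the detector; the weight factor keeps the flux estimate unbiased.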

  8. Neutron analysis of spent fuel storage installation using parallel computing and advanced discrete ordinates and Monte Carlo techniques.

    PubMed

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ~10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ~6.1 m high and 3.3 m in diameter. The inherently large system and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, thereby requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinates PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ~20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  9. Quantum Monte Carlo Computations of the (Mg1-XFeX) SiO3 Perovskite to Post-perovskite Phase Boundary

    NASA Astrophysics Data System (ADS)

    Lin, Yangzheng; Cohen, R. E.; Floris, Andrea; Shulenburger, Luke; Driver, Kevin P.

    We have computed total energies of FeSiO3 and MgSiO3 [1] perovskite and post-perovskite using diffusion Monte Carlo with the qmcpack GPU code. In conjunction with DFT+U computations for intermediate compositions (Mg1-XFeX)SiO3 and phonons computed using density functional perturbation theory (DFPT) with the pwscf code, we have derived the chemical potentials of perovskite (Pv) and post-perovskite (PPv) (Mg1-XFeX)SiO3 and computed the binary phase diagram versus P, T, and X using a non-ideal solid solution model. The finite temperature effects were considered within the quasi-harmonic approximation (QHA). Our results show that ferrous iron stabilizes PPv and lowers the Pv-PPv transition pressure, which is consistent with previous theoretical and some experimental studies. We will discuss the correlation between the Earth's D'' layer and the Pv to PPv phase boundary. Computations were performed on XSEDE machines and on the Oak Ridge Leadership Computing Facility (OLCF) machine Titan under project CPH103geo of the INCITE program. E-mail: rcohen@carnegiescience.edu; This work is supported by NSF.

  10. Performance of an ARC-enabled computing grid for ATLAS/LHC physics analysis and Monte Carlo production under realistic conditions

    NASA Astrophysics Data System (ADS)

    Samset, B. H.; Cameron, D.; Ellert, M.; Filipcic, A.; Gronager, M.; Kleist, J.; Maffioletti, S.; Ould-Saada, F.; Pajchel, K.; Read, A. L.; Taga, A.; ATLAS Collaboration

    2010-04-01

    A significant amount of the computing resources available to the ATLAS experiment at the LHC are connected via the ARC grid middleware. ATLAS ARC-enabled resources, which consist of both major computing centers at the Tier-1 level and smaller local clusters at the Tier-2 and Tier-3 levels, have shown excellent performance running heavy Monte Carlo (MC) production for the experiment. However, with the imminent arrival of LHC physics data, it is imperative that the deployed grid middleware can also handle data access patterns caused by user-defined physics analysis. These user-defined jobs can have radically different demands than systematic, centrally controlled MC production. We report on the performance of the ARC middleware, as deployed for ATLAS, in realistic situations with concurrent MC production and physics analysis running on the same resources. Data access patterns for ATLAS MC and physics analysis grid jobs are shown, together with the performance of various possible storage and file staging models.

  11. Preliminary TRIGA fuel burn-up evaluation by means of Monte Carlo code and computation based on total energy released during reactor operation

    SciTech Connect

    Borio Di Tigliole, A.; Bruni, J.; Panza, F.; Alloni, D.; Cagnazzo, M.; Magrotti, G.; Manera, S.; Prata, M.; Salvini, A.; Chiesa, D.; Clemenza, M.; Pattavina, L.; Previtali, E.; Sisti, M.; Cammi, A.

    2012-07-01

    The aim of this work was to perform a rough preliminary evaluation of the burn-up of the fuel of the TRIGA Mark II research reactor of the Applied Nuclear Energy Laboratory (LENA) of the Univ. of Pavia. In order to achieve this goal, a computation of the neutron flux density in each fuel element was performed by means of the Monte Carlo code MCNP (Version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) inside the fuel and, in the end, to evaluate the burn-up and the uranium consumption in each fuel element. The evaluation showed fair agreement with the computation of fuel burn-up based on the total energy released during reactor operation. (authors)
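The final step described (turning a computed flux and effective one-group cross sections into a burn-up estimate) reduces, in its simplest form, to analytic one-group depletion. The sketch below uses made-up flux and cross-section values for illustration; it is not the LENA calculation.

```python
import math

BARN_CM2 = 1.0e-24          # 1 barn in cm^2
JOULES_PER_FISSION = 3.2e-11  # ~200 MeV per fission

def deplete(n0, sigma_a_barns, flux, seconds):
    """One-group depletion of a fissile nuclide: N(t) = N0 * exp(-sigma_a * phi * t)."""
    return n0 * math.exp(-sigma_a_barns * BARN_CM2 * flux * seconds)

def fission_power_watts(n_atoms, sigma_f_barns, flux):
    """Fission power from the reaction rate N * sigma_f * phi."""
    return n_atoms * sigma_f_barns * BARN_CM2 * flux * JOULES_PER_FISSION
```

Integrating the fission power over the operating history gives the total energy released, which is the basis of the cross-check mentioned in the abstract.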

  12. Space shuttle solid rocket booster recovery system definition. Volume 2: SRB water impact Monte Carlo computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.

  13. Three dimensional Monte Carlo simulation of molecular movement and heat radiation in vacuum devices: Computer code MOVAK3D

    NASA Astrophysics Data System (ADS)

    Class, G.

    1987-07-01

    A program to simulate gas motion and shine-through of thermal radiation in fusion reactor vacuum flow channels was developed. The inner surface of the flow channel is described by plane areas (triangles, parallelograms) and by surfaces of revolution. By introducing control planes in the flow path, variance reduction and a corresponding shortening of the computation are achieved through particle splitting and Russian roulette. The code is written in PL/I and verified using published data. Computer-aided input of model data is performed interactively either under IBM-TSO or at a microprocessor (IBM PC-AT). The data files are exchangeable between the IBM mainframe and IBM-PC computers. Both computers can produce plots of the elaborated channel model. For testing, the simulation can likewise be run interactively, whereas production computations can be issued in batch mode. The results of code verification are explained, and examples of channel models and of the interactive mode are given.
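Particle splitting and Russian roulette, as applied at the control planes above, are standard weight-preserving variance-reduction moves; a minimal, code-agnostic sketch (not the MOVAK3D implementation):

```python
import random

def split(weight, n_copies):
    """Split one particle into n copies that share its statistical weight."""
    return [weight / n_copies] * n_copies

def russian_roulette(weight, survival_weight, rng):
    """Terminate a low-weight particle with probability 1 - w/ws; survivors
    continue with weight ws, so the expected weight is unchanged."""
    if rng.random() < weight / survival_weight:
        return survival_weight
    return 0.0  # particle terminated
```

Splitting is applied where particles move toward important regions, roulette where they move away; both leave the tallied mean unbiased while trading particle count against per-particle weight.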

  14. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low Earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as

  15. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    SciTech Connect

    Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; DeMarco, John J.

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions that result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations needed to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built on the MCNPX platform, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scans, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms, including a rectangular homogeneous water-equivalent phantom, an elliptical phantom with three sections (each section homogeneous, but of a different material), and a heterogeneous, complex-geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector-row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain

  16. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated to make the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' option in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also reproduced the dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements to within 10% overall. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the
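The table-movement bookkeeping described (advancing the isocenter coordinate with beam angle) amounts to placing the source on a helix whose axial advance per rotation is pitch times collimation. The function and parameter names below are illustrative, not the DOSXYZnrc variables.

```python
import math

def helical_source_position(angle_rad, radius_mm, pitch, collimation_mm, z0_mm=0.0):
    """Source point for a spiral scan: (x, y) on the gantry circle, with z
    advancing by pitch * collimation per full rotation (translation per rotation)."""
    turns = angle_rad / (2.0 * math.pi)
    return (radius_mm * math.cos(angle_rad),
            radius_mm * math.sin(angle_rad),
            z0_mm + pitch * collimation_mm * turns)
```

Equivalently, one can keep the source on a fixed circle and shift the phantom isocenter by the same z increment per beam angle, which is the approach the abstract describes.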

  17. Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0

    SciTech Connect

    Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.

    1996-10-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models from free-molecular to continuum flowfields in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed external generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.

  18. How the transition frequencies of microtubule dynamic instability (nucleation, catastrophe, and rescue) regulate microtubule dynamics in interphase and mitosis: analysis using a Monte Carlo computer simulation.

    PubMed Central

    Gliksman, N R; Skibbens, R V; Salmon, E D

    1993-01-01

    Microtubules (MTs) in newt mitotic spindles grow faster than MTs in the interphase cytoplasmic microtubule complex (CMTC), yet spindle MTs do not have the long lengths or lifetimes of the CMTC microtubules. Because MTs undergo dynamic instability, it is likely that changes in the durations of growth or shortening are responsible for this anomaly. We have used a Monte Carlo computer simulation to examine how changes in the number of MTs and changes in the catastrophe and rescue frequencies of dynamic instability may be responsible for the cell cycle dependent changes in MT characteristics. We used the computer simulations to model interphase-like or mitotic-like MT populations on the basis of the dynamic instability parameters available from newt lung epithelial cells in vivo. We started with parameters that produced MT populations similar to the interphase newt lung cell CMTC. In the simulation, increasing the number of MTs and either increasing the frequency of catastrophe or decreasing the frequency of rescue reproduced the changes in MT dynamics measured in vivo between interphase and mitosis. PMID:8298190
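The two-state dynamic-instability model behind such simulations is compact: a microtubule grows until a catastrophe event and shrinks until a rescue. Below is a generic sketch; the rates and step sizes are made-up values, not the newt-cell parameters used in the paper.

```python
import random

def simulate_microtubule(t_end, dt, v_grow, v_shrink, f_cat, f_res, seed=0):
    """Two-state MT length after time t_end: grow at v_grow until a catastrophe
    (frequency f_cat per unit time), shrink at v_shrink until a rescue (f_res)."""
    rng = random.Random(seed)
    length, growing = 0.0, True
    for _ in range(round(t_end / dt)):
        if growing:
            length += v_grow * dt
            if rng.random() < f_cat * dt:   # catastrophe: switch to shrinking
                growing = False
        else:
            length = max(0.0, length - v_shrink * dt)
            if rng.random() < f_res * dt:   # rescue: switch back to growth
                growing = True
    return length
```

Raising f_cat or lowering f_res shortens MT lengths and lifetimes, which is exactly the interphase-to-mitosis shift the study explores.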

  19. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loops. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles; meanwhile, the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
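The majorant-kernel acceptance-rejection step described above can be sketched generically: a cheap upper bound K_max on the coagulation kernel lets candidate pairs be drawn quickly, then accepted with probability K/K_max. The additive kernel and its bound here are illustrative stand-ins, not the kernels from the paper.

```python
import random

def majorant_additive(volumes):
    """Upper bound for the additive kernel K(v, w) = v + w over all pairs."""
    return 2.0 * max(volumes)

def accept_pair(kernel, v_i, v_j, k_max, rng):
    """Accept a candidate coagulation pair with probability K(v_i, v_j) / K_max."""
    return rng.random() < kernel(v_i, v_j) / k_max
```

Because K_max can be updated in a single loop over particles, the quadratic pair search of naive PBMC is avoided, which is the efficiency point the abstract makes.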

  1. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    SciTech Connect

    Burns, T.J.

    1994-03-01

    An X Window application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes, can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

  2. Sensitivity study of a large-scale air pollution model by using high-performance computations and Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.

    2013-10-01

    In this paper we present some new results of our work on sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is done for the areas of several European cities with different geographical locations, climate, industrialization and population density. Sobol estimates and their modifications, among the most widely used variance-based techniques for sensitivity analysis, have been used in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for this purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to parallel efficiency and memory use were performed. Tables with performance results of a number of numerical experiments on IBM Blue Gene/P and on IBM MareNostrum III are presented and analysed.
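A Saltelli-style Monte Carlo estimate of first-order Sobol indices, the global sensitivity measure computed in such studies, can be sketched for a generic model with independent U(0,1) inputs. This is the textbook estimator, not the SA-DEM code; the linear check model is purely illustrative.

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for a model with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    fa, fb = model(a), model(b)
    var_y = np.var(np.concatenate([fa, fb]))
    indices = []
    for i in range(n_inputs):
        ab = a.copy()
        ab[:, i] = b[:, i]            # resample only input i
        indices.append(float(np.mean(fb * (model(ab) - fa)) / var_y))
    return indices

# Analytic check model: Y = X1 + 2*X2 has S1 = 0.2 and S2 = 0.8
s1, s2 = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], 2, 20000)
```

In the air-pollution setting, the inputs would be the perturbed reaction rates and the model output an ozone concentration at a given location.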

  3. MO-G-17A-04: Internal Dosimetric Calculations for Pediatric Nuclear Imaging Applications, Using Monte Carlo Simulations and High-Resolution Pediatric Computational Models

    SciTech Connect

    Papadimitroulas, P; Kagadis, GC; Loudos, G

    2014-06-15

    Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm^3. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose absorbed in the kidneys and pancreas (9.29*10^10 and 0.15*10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
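The S-factor bookkeeping above follows the MIRD scheme, in which the dose to a target organ is the sum over source organs of cumulated activity times S(target <- source). A minimal sketch with made-up numbers (not the study's values):

```python
def target_dose_mgy(cumulated_activity_mbq_s, s_factors_gy_per_mbq_s):
    """MIRD-style sum: D(target) = sum over sources of A~ * S(target <- source).
    Inputs in MBq*s and Gy/(MBq*s); result converted from Gy to mGy."""
    gray = sum(a * s for a, s in zip(cumulated_activity_mbq_s,
                                     s_factors_gy_per_mbq_s))
    return gray * 1000.0
```

The Monte Carlo simulations supply the S-factors; the cumulated activities come from the injected activity and the literature biodistributions.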

  4. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementation of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, was developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C language. Parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability, with speedup proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. Typical speedup achieved for parallel MCML (up to 326x) demonstrated linear increase with problem size. Precision of the MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially-resolved results. The presented parallel versions of the MCML and IAD programs are portable on multiple computing platforms. The parallel programs could significantly speed up simulations for scientists and be utilized to their full potential in computing systems that are readily available without additional costs.
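Photon-packet Monte Carlo parallelizes naturally because packets are independent: each rank simulates its share and the tallies are summed at the end. The even split below is a generic sketch of that decomposition, not the authors' MPI code.

```python
def partition_photons(n_photons, n_ranks):
    """Even MPI-style split of photon packets across ranks; the remainder
    goes to the lowest-numbered ranks so counts differ by at most one."""
    base, extra = divmod(n_photons, n_ranks)
    return [base + (1 if rank < extra else 0) for rank in range(n_ranks)]
```

With a workload this embarrassingly parallel, the speedup is near-linear until per-rank work becomes small enough that communication and startup overhead dominate, which matches the saturation behaviour reported above.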

  5. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    NASA Astrophysics Data System (ADS)

    Bednarz, Bryan; Hancox, Cindy; Xu, X. George

    2009-09-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU

  6. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    PubMed Central

    Bednarz, Bryan; Hancox, Cindy; Xu, X George

    2012-01-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU

  7. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^(-1/2)), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^(-1)). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
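    The two convergence rates quoted above can be observed directly in a toy one-dimensional quadrature, using a base-2 van der Corput sequence as the low-discrepancy point set. This is a minimal sketch (practical quasi-Monte Carlo uses higher-dimensional sequences such as Sobol'):

    ```python
    import math
    import random

    def van_der_corput(n, base=2):
        """n-th term of the base-b van der Corput low-discrepancy sequence."""
        q, bk = 0.0, 1.0 / base
        while n > 0:
            n, r = divmod(n, base)
            q += r * bk
            bk /= base
        return q

    def mc_integrate(f, n, seed=0):
        """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
        rng = random.Random(seed)
        return sum(f(rng.random()) for _ in range(n)) / n

    def qmc_integrate(f, n):
        """Quasi-Monte Carlo estimate using van der Corput points."""
        return sum(f(van_der_corput(i + 1)) for i in range(n)) / n

    f, exact = math.exp, math.e - 1      # integral of e^x over [0, 1]
    n = 4096
    qmc_err = abs(qmc_integrate(f, n) - exact)
    # Average the Monte Carlo error over several seeds to see the O(N^(-1/2)) scale.
    avg_mc_err = sum(abs(mc_integrate(f, n, seed=s) - exact) for s in range(10)) / 10
    ```

    For a smooth integrand the quasi-random points give an error orders of magnitude below the roughly N^(-1/2)-sized Monte Carlo error at the same sample count.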

  8. Sensitivity analysis for liver iron measurement through neutron stimulated emission computed tomography: a Monte Carlo study in GEANT4

    NASA Astrophysics Data System (ADS)

    Agasthya, G. A.; Harrawood, B. C.; Shah, J. P.; Kapadia, A. J.

    2012-01-01

    Neutron stimulated emission computed tomography (NSECT) is being developed as a non-invasive imaging modality to detect and quantify iron overload in the human liver. NSECT uses gamma photons emitted by the inelastic interaction between monochromatic fast neutrons and iron nuclei in the body to detect and quantify the disease. Previous simulated and physical experiments with phantoms have shown that NSECT has the potential to accurately diagnose iron overload with reasonable levels of radiation dose. In this work, we describe the results of a simulation study conducted to determine the sensitivity of the NSECT system for hepatic iron quantification in patients of different sizes. A GEANT4 simulation of the NSECT system was developed with a human liver and two torso sizes corresponding to small and large patients. The iron concentration in the liver ranged between 0.5 and 20 mg g^-1 (in this paper, all iron concentrations with units mg g^-1 refer to wet-weight concentrations), corresponding to clinically reported iron levels in iron-overloaded patients. High-purity germanium gamma detectors were simulated to detect the emitted gamma spectra, which were background corrected using suitable water phantoms and analyzed to determine the minimum detectable level (MDL) of iron and the sensitivity of the NSECT system. These analyses indicate that for a small patient (torso major axis = 30 cm) the MDL is 0.5 mg g^-1 and the sensitivity is ~13 ± 2 Fe counts/mg/mSv, while for a large patient (torso major axis = 40 cm) the values are 1 mg g^-1 and ~5 ± 1 Fe counts/mg/mSv, respectively. The results demonstrate that the MDL for both patient sizes lies within the clinically significant range for human iron overload.

  9. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic

  10. Dynamic 99mTc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael

    2013-05-01

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
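    The key property the abstract relies on — that the PK compartment model conserves the total tracer amount between time points — can be illustrated with a deliberately simplified compartment chain. The structure and rate constants below are hypothetical placeholders, not fitted MAG3 kinetics:

    ```python
    def simulate(k_pk=0.05, k_kb=0.03, dt=0.1, t_end=60.0):
        """Forward-Euler integration of a plasma -> kidneys -> bladder chain.

        k_pk, k_kb: first-order rate constants (per minute, illustrative only).
        Returns per-compartment time-activity curves as fractions of the
        injected activity.
        """
        plasma, kidney, bladder = 1.0, 0.0, 0.0
        curves = {"plasma": [], "kidney": [], "bladder": []}
        for _ in range(int(t_end / dt)):
            flow_pk = k_pk * plasma * dt      # plasma -> kidney transfer
            flow_kb = k_kb * kidney * dt      # kidney -> bladder transfer
            plasma -= flow_pk
            kidney += flow_pk - flow_kb
            bladder += flow_kb
            curves["plasma"].append(plasma)
            curves["kidney"].append(kidney)
            curves["bladder"].append(bladder)
        return curves

    curves = simulate()
    # Transfers only move activity between compartments, so the total is conserved.
    total = curves["plasma"][-1] + curves["kidney"][-1] + curves["bladder"][-1]
    ```

    Because every term removed from one compartment is added to another, the total activity stays at 1.0 at every time point, and the kidney curve shows the rise-then-washout shape expected of a renogram.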

  11. Study of applied magnetic field magnetoplasmadynamic thrusters with particle-in-cell code with Monte Carlo collision. I. Computation methods and physical processes

    SciTech Connect

    Tang Haibin; Cheng Jiao; Liu Chang; York, Thomas M.

    2012-07-15

    A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collision conditions has been developed for an applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of particles. This new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), detail of the computation procedure and results of predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster which has well documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.

  12. Study of applied magnetic field magnetoplasmadynamic thrusters with particle-in-cell code with Monte Carlo collision. I. Computation methods and physical processes

    NASA Astrophysics Data System (ADS)

    Tang, Hai-Bin; Cheng, Jiao; Liu, Chang; York, Thomas M.

    2012-07-01

    A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collision conditions has been developed for an applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of particles. This new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), detail of the computation procedure and results of predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster which has well documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.

  13. Computer-based first-principles kinetic Monte Carlo simulation of polyethylene glycol degradation in aqueous phase UV/H2O2 advanced oxidation process.

    PubMed

    Guo, Xin; Minakata, Daisuke; Crittenden, John

    2014-09-16

    We have developed a computer-based first-principles kinetic Monte Carlo (CF-KMC) model to predict degradation mechanisms and the fates of intermediates and byproducts produced from the degradation of polyethylene glycol (PEG) by UV light in the presence of hydrogen peroxide (UV/H2O2). The CF-KMC model is composed of a reaction pathway generator, a reaction rate constant estimator, and a KMC solver. The KMC solver is able to solve the predicted pathways successfully without solving ordinary differential equations. The predicted time-dependent profiles of averaged molecular weight and polydispersity index (i.e., the ratio of the weight-averaged molecular weight to the number-averaged molecular weight) for the PEG degradation were validated against experimental observations. These predictions are consistent with the experimental data. The model provided detailed and quantitative insights into the time evolution of the molecular weight distribution and the concentration profiles of low molecular weight products and functional groups. Our approach may be useful to predict the fates of degradation products for a wide range of complicated organic contaminants. PMID:25158613
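    A KMC solver of the kind described advances the system reaction by reaction with stochastically sampled waiting times instead of integrating ODEs. Below is a generic direct-method (Gillespie) sketch applied to a hypothetical two-step degradation chain A -> B -> C; this is not the paper's PEG pathway generator, and the species and rate constants are invented:

    ```python
    import random

    def gillespie(rates, state, stoich, t_end, seed=1):
        """Direct-method kinetic Monte Carlo for mass-action reactions.

        rates:  list of rate constants k_j
        state:  dict of species counts
        stoich: list of (reactants, products) tuples, one per reaction
        """
        rng = random.Random(seed)
        t = 0.0
        while True:
            # Propensity of reaction j: k_j times the product of reactant counts.
            props = []
            for k, (reactants, _) in zip(rates, stoich):
                a = k
                for sp in reactants:
                    a *= state[sp]
                props.append(a)
            a0 = sum(props)
            if a0 == 0.0:
                break                     # no reaction can fire any more
            dt = rng.expovariate(a0)      # exponentially distributed waiting time
            if t + dt > t_end:
                break
            t += dt
            r = rng.random() * a0         # pick reaction j with prob props[j]/a0
            j = 0
            while r > props[j]:
                r -= props[j]
                j += 1
            reactants, products = stoich[j]
            for sp in reactants:
                state[sp] -= 1
            for sp in products:
                state[sp] += 1
        return state

    # Hypothetical degradation chain A -> B -> C with unit rate constants.
    final = gillespie(rates=[1.0, 1.0],
                      state={"A": 100, "B": 0, "C": 0},
                      stoich=[(["A"], ["B"]), (["B"], ["C"])],
                      t_end=50.0)
    ```

    The solver touches only species counts and propensities, which is why such schemes scale to reaction networks too large or too stiff for an ODE treatment.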

  14. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    PubMed

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
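    The simplest member of the ABC family, plain rejection sampling, conveys the idea behind the ABC-SMC machinery used above: draw parameters from the prior, simulate data, and keep only draws whose summary statistic lands within a tolerance of the observed summary. This is a toy sketch on inferring the mean of an exponential waiting time, not the paper's metabolic model; all names and numbers are illustrative:

    ```python
    import random

    def abc_rejection(observed, simulate, prior_sample, eps, n_draws, seed=2):
        """Approximate Bayesian computation by rejection sampling.

        Keeps prior draws whose simulated summary statistic falls within
        eps of the observed summary.
        """
        rng = random.Random(seed)
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample(rng)
            if abs(simulate(theta, rng) - observed) < eps:
                accepted.append(theta)
        return accepted

    # Toy inference: recover the mean theta of an exponential waiting time
    # from the sample mean of 50 draws (true value 2.0, flat prior on [0.1, 5]).
    posterior = abc_rejection(
        observed=2.0,
        simulate=lambda th, rng: sum(rng.expovariate(1.0 / th) for _ in range(50)) / 50,
        prior_sample=lambda rng: rng.uniform(0.1, 5.0),
        eps=0.2,
        n_draws=20_000,
    )
    est = sum(posterior) / len(posterior)
    ```

    ABC-SMC improves on this by tightening eps over a sequence of populations and reweighting the surviving particles, which keeps the acceptance rate workable for expensive simulators.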

  15. A three-dimensional computed tomography-assisted Monte Carlo evaluation of ovoid shielding on the dose to the bladder and rectum in intracavitary radiotherapy for cervical cancer

    SciTech Connect

    Gifford, Kent A. . E-mail: kagifford@mail.mdanderson.org; Horton, John L.; Pelloski, Christopher E.; Jhingran, Anuja; Court, Laurence E.; Eifel, Patricia J.

    2005-10-01

    Purpose: To determine the effects of Fletcher Suit Delclos ovoid shielding on dose to the bladder and rectum during intracavitary radiotherapy for cervical cancer. Methods and Materials: The Monte Carlo method was used to calculate the dose in 12 patients receiving low-dose-rate intracavitary radiotherapy with both shielded and unshielded ovoids. Cumulative dose-difference surface histograms were computed for the bladder and rectum. Doses to the 2-cm^3 and 5-cm^3 volumes of highest dose were computed for the bladder and rectum with and without shielding. Results: Shielding affected dose to the 2-cm^3 and 5-cm^3 volumes of highest dose for the rectum (10.1% and 11.1% differences, respectively). Shielding did not have a major impact on the dose to the 2-cm^3 and 5-cm^3 volumes of highest dose for the bladder. The average dose reduction to 5% of the surface area of the bladder was 53 cGy. Reductions as large as 150 cGy were observed to 5% of the surface area of the bladder. The average dose reduction to 5% of the surface area of the rectum was 195 cGy. Reductions as large as 405 cGy were observed to 5% of the surface area of the rectum. Conclusions: Our data suggest that the ovoid shields can greatly reduce the radiation dose delivered to the rectum. We did not find the same degree of effect on the dose to the bladder. To calculate the dose accurately, however, the ovoid shields must be included in the dose model.

  16. Measurement and Monte Carlo simulation for energy- and intensity-modulated electron radiotherapy delivered by a computer-controlled electron multileaf collimator.

    PubMed

    Jin, Lihui; Eldib, Ahmed; Li, Jinsheng; Emam, Ismail; Fan, Jiajin; Wang, Lu; Ma, C-M

    2014-01-01

    The dosimetric advantage of modulated electron radiotherapy (MERT) has been explored by many investigators and is considered to be an advanced radiation therapy technique in the utilization of electrons. A computer-controlled electron multileaf collimator (MLC) prototype, newly designed to be added onto a Varian linac to deliver MERT, was investigated both experimentally and by Monte Carlo simulations. Four different electron energies, 6, 9, 12, and 15 MeV, were employed for this investigation. To ensure that this device was capable of delivering the electron beams properly, measurements were performed to examine the electron MLC (eMLC) leaf leakage and to determine the appropriate jaw positioning for an eMLC-shaped field in order to eliminate a secondary radiation peak that could otherwise appear outside of an intended radiation field in the case of inappropriate jaw positioning due to insufficient radiation blockage from the jaws. Phase space data were obtained by Monte Carlo (MC) simulation and recorded at the plane just above the jaws for each of the energies (6, 9, 12, and 15 MeV). As an input source, phase space data were used in MC dose calculations for various sizes of the eMLC shaped field (10 × 10 cm^2, 3.4 × 3.4 cm^2, and 2 × 2 cm^2) with respect to a water phantom at source-to-surface distance (SSD) = 94 cm, while the jaws, eMLC leaves, and some accessories associated with the eMLC assembly were modeled as modifiers in the calculations. The calculated results were then compared with measurements from a water scanning system. The results showed that jaw settings with 5 mm margins beyond the field shaped by the eMLC were appropriate to eliminate the secondary radiation peak while not widening the beam penumbra; the eMLC leaf leakage measurements ranged from 0.3% to 1.8% for different energies based on in-phantom measurements, which should be quite acceptable for MERT. Comparisons between MC dose calculations and measurements showed agreement

  17. SU-E-I-02: A Framework to Perform Batch Simulations of Computational Voxel Phantoms to Study Organ Doses in Computed Tomography Using a Commercial Monte Carlo Software Package

    SciTech Connect

    Bujila, R; Nowik, P; Poludniowski, G

    2014-06-01

    Purpose: ImpactMC (CT Imaging, Erlangen, Germany) is a Monte Carlo (MC) software package that offers a GPU-enabled, user-definable and validated method for 3D dose distribution calculations for radiography and Computed Tomography (CT). ImpactMC, in and of itself, offers limited capabilities to perform batch simulations. The aim of this work was to develop a framework for the batch simulation of absorbed organ dose distributions from CT scans of computational voxel phantoms. Methods: The ICRP 110 adult Reference Male and Reference Female computational voxel phantoms were formatted into compatible input volumes for MC simulations. A Matlab (The MathWorks Inc., Natick, MA) script was written to loop through a user-defined set of simulation parameters and 1) generate input files required for the simulation, 2) start the MC simulation, 3) segment the absorbed dose for organs in the simulated dose volume and 4) transfer the organ doses to a database. A demonstration of the framework is made where the glandular breast dose to the adult Reference Female phantom, for a typical Chest CT examination, is investigated. Results: A batch of 48 contiguous simulations was performed with variations in the total collimation and spiral pitch. The demonstration of the framework showed that the glandular dose to the right and left breast will vary depending on the start angle of rotation, total collimation and spiral pitch. Conclusion: The developed framework provides a robust and efficient approach to performing a large number of user-defined MC simulations with computational voxel phantoms in CT (minimal user interaction). The resulting organ doses from each simulation can be accessed through a database, which greatly increases the ease of analyzing the resulting organ doses. The framework developed in this work provides a valuable resource when investigating different dose optimization strategies in CT.

  18. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    NASA Astrophysics Data System (ADS)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial condition, and on two GPUs with 384 cores and 2304 cores. In simulations using the GPU, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality from the simulation was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is relatively the same.

  19. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
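    One of the PMC services mentioned above — providing independent, reproducible random-number sequences per node — can be sketched with NumPy's SeedSequence.spawn, which derives statistically independent child streams from a single master seed. This illustrates the idea, not PMC's actual implementation, and assumes NumPy is available:

    ```python
    import numpy as np

    def worker_estimate(seed_seq, n_samples):
        """One 'node': estimate pi by rejection sampling in the unit square,
        using its own independent random stream."""
        rng = np.random.default_rng(seed_seq)
        xy = rng.random((n_samples, 2))
        inside = np.count_nonzero(xy[:, 0] ** 2 + xy[:, 1] ** 2 < 1.0)
        return 4.0 * inside / n_samples

    def parallel_pi(master_seed, n_workers, n_samples):
        # spawn() derives statistically independent child streams, so results
        # are reproducible from the master seed regardless of execution order.
        children = np.random.SeedSequence(master_seed).spawn(n_workers)
        estimates = [worker_estimate(s, n_samples) for s in children]
        return sum(estimates) / n_workers

    a = parallel_pi(master_seed=42, n_workers=8, n_samples=100_000)
    b = parallel_pi(master_seed=42, n_workers=8, n_samples=100_000)
    ```

    Deriving all streams from one master seed is what lets a run be replayed bit-for-bit, which is the reproducibility property PMC guarantees across its serial, parallel and distributed modes.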

  20. Extension of RPI-adult male and female computational phantoms to obese patients and a Monte Carlo study of the effect on CT imaging dose

    PubMed Central

    Ding, Aiping; Mille, Matthew M; Liu, Tianyu; Caracappa, Peter F; Xu, X George

    2012-01-01

    Although it is known that obesity has a profound effect on x-ray computed tomography (CT) image quality and patient organ dose, quantitative data describing this relationship are not currently available. This study examines the effect of obesity on the calculated radiation dose to organs and tissues from CT using newly developed phantoms representing overweight and obese patients. These phantoms were derived from the previously developed RPI-adult male and female computational phantoms. The result was a set of ten phantoms (five males, five females) with body mass indexes ranging from 23.5 (normal body weight) to 46.4 kg m−2 (morbidly obese). The phantoms were modeled using triangular mesh geometry and include specified amounts of the subcutaneous adipose tissue and visceral adipose tissue. The mesh-based phantoms were then voxelized and defined in the Monte Carlo N-Particle Extended code to calculate organ doses from CT imaging. Chest–abdomen–pelvis scanning protocols for a GE LightSpeed 16 scanner operating at 120 and 140 kVp were considered. It was found that for the same scanner operating parameters, radiation doses to organs deep in the abdomen (e.g., colon) can be up to 59% smaller for obese individuals compared to those of normal body weight. This effect was found to be less significant for shallow organs. On the other hand, increasing the tube potential from 120 to 140 kVp for the same obese individual resulted in increased organ doses by as much as 56% for organs within the scan field (e.g., stomach) and 62% for those out of the scan field (e.g., thyroid), respectively. As higher tube currents are often used for larger patients to maintain image quality, it was of interest to quantify the associated effective dose. It was found from this study that when the mAs was doubled for the obese level-I, obese level-II and morbidly-obese phantoms, the effective dose relative to that of the normal weight phantom increased by 57%, 42% and 23%, respectively

  1. Extension of RPI-adult male and female computational phantoms to obese patients and a Monte Carlo study of the effect on CT imaging dose

    NASA Astrophysics Data System (ADS)

    Ding, Aiping; Mille, Matthew M.; Liu, Tianyu; Caracappa, Peter F.; Xu, X. George

    2012-05-01

    Although it is known that obesity has a profound effect on x-ray computed tomography (CT) image quality and patient organ dose, quantitative data describing this relationship are not currently available. This study examines the effect of obesity on the calculated radiation dose to organs and tissues from CT using newly developed phantoms representing overweight and obese patients. These phantoms were derived from the previously developed RPI-adult male and female computational phantoms. The result was a set of ten phantoms (five males, five females) with body mass indexes ranging from 23.5 (normal body weight) to 46.4 kg m-2 (morbidly obese). The phantoms were modeled using triangular mesh geometry and include specified amounts of the subcutaneous adipose tissue and visceral adipose tissue. The mesh-based phantoms were then voxelized and defined in the Monte Carlo N-Particle Extended code to calculate organ doses from CT imaging. Chest-abdomen-pelvis scanning protocols for a GE LightSpeed 16 scanner operating at 120 and 140 kVp were considered. It was found that for the same scanner operating parameters, radiation doses to organs deep in the abdomen (e.g., colon) can be up to 59% smaller for obese individuals compared to those of normal body weight. This effect was found to be less significant for shallow organs. On the other hand, increasing the tube potential from 120 to 140 kVp for the same obese individual resulted in increased organ doses by as much as 56% for organs within the scan field (e.g., stomach) and 62% for those out of the scan field (e.g., thyroid), respectively. As higher tube currents are often used for larger patients to maintain image quality, it was of interest to quantify the associated effective dose. It was found from this study that when the mAs was doubled for the obese level-I, obese level-II and morbidly-obese phantoms, the effective dose relative to that of the normal weight phantom increased by 57%, 42% and 23%, respectively. 
This set

  2. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport

    SciTech Connect

    Lee, C; Badal, A

    2014-06-15

    Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric error relative to real anatomy, which is bounded by smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into penMesh, a Monte Carlo transport code that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams at three energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved at higher voxel resolution, but for many organs the differences were small. The difference in energy deposition at 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in voxels were 6.4×10⁴, 3.3×10⁴, and 1.3×10⁴ for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10⁴ histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.

  3. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  4. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W.; E-mail: mary.chin@physics.org

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes were used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cmx5 cm to 20 cmx20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  5. Monte Carlo computed machine-specific correction factors for reference dosimetry of TomoTherapy static beam for several ion chambers

    SciTech Connect

    Sterpin, E.; Mackie, T. R.; Vynckier, S.

    2012-07-15

    Purpose: To determine k_{Q_msr,Q_0}^{f_msr,f_0} correction factors for machine-specific reference (msr) conditions by Monte Carlo (MC) simulations for reference dosimetry of TomoTherapy static beams for the ion chambers Exradin A1SL, A12; PTW 30006, 31010 Semiflex, 31014 PinPoint, 31018 microLion; and NE 2571. Methods: For the calibration of TomoTherapy units, the reference conditions specified in current codes of practice like IAEA/TRS-398 and AAPM/TG-51 cannot be realized. To cope with this issue, Alfonso et al. [Med. Phys. 35, 5179-5186 (2008)] described a new formalism introducing msr factors k_{Q_msr,Q_0}^{f_msr,f_0} for reference dosimetry, applicable to static TomoTherapy beams. In this study, those factors were computed directly using MC simulations for Q_0 corresponding to a simplified ⁶⁰Co beam in TRS-398 reference conditions (at 10 cm depth). The msr conditions were a 10 × 5 cm² TomoTherapy beam, source-surface distance of 85 cm, and 10 cm depth. The chambers were modeled according to technical drawings using the egs++ package, and the MC simulations were run with the egs_chamber user code. Phase-space files used as the source input were produced using PENELOPE after simulation of a simplified ⁶⁰Co beam and the TomoTherapy treatment head modeled according to technical drawings. Correlated sampling, intermediate phase-space storage, and photon cross-section enhancement variance reduction techniques were used. The simulations were stopped when the combined standard uncertainty was below 0.2%. Results: Computed k_{Q_msr,Q_0}^{f_msr,f_0} values were all close to one, in a range from 0.991 for the PinPoint chamber to 1.000 for the Exradin A12, with a statistical uncertainty below 0.2%. Considering a beam quality Q defined as the TPR_{20,10} for a 6 MV Elekta photon beam (0

  6. A Monte Carlo simulation study of the effect of energy windows in computed tomography images based on an energy-resolved photon counting detector

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

    2012-08-01

    The energy-resolved photon counting detector provides spectral information that can be used to generate images. Novel imaging methods, including K-edge imaging, projection-based energy weighting imaging, and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging, and image-based energy weighting imaging were simulated using Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV), and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled an x-ray computed tomography system based on a CdTe energy-resolved photon counting detector and a polymethylmethacrylate phantom containing iodine, gadolinium, and blood. To acquire K-edge images, the lower energy thresholds were fixed at the K-edge absorption energies of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium, and blood were calculated from 5, 10, 15, 19, and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above a specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is a little difference in the contrast, when the number of energy bin is

  7. TH-A-19A-07: The Effect of Particle Tracking Step Size Limit On Monte Carlo- Computed LET Spectrum of Therapeutic Proton Beams

    SciTech Connect

    Guan, F; Bronk, L; Kerr, M; Titt, U; Taleei, R; Mirkovic, D; Zhu, X; Grosshans, D; Mohan, R

    2014-06-15

    Purpose: To investigate the effect of the charged particle tracking step size limit on the determination of the LET spectrum of therapeutic proton beams using Monte Carlo simulations. Methods: The LET spectra at different depths in a water phantom from a 79.7 MeV spot-scanning proton beam were calculated using Geant4. Five different tracking step limits were adopted: 0.5 mm, 0.1 mm, 0.05 mm, 0.01 mm, and 1 μm. The field size was set to 10×10 cm² on the isocenter plane. A 40×40×6 cm³ water phantom was modelled as the irradiation target. The voxel size was set to 1×1×0.5 mm³ to obtain high resolution results. The LET spectra were scored from 0.01 keV/μm to 10⁴ keV/μm on a logarithmic scale. In addition, the proton energy spectra at different depths were also scored. Results: The LET spectra calculated using different step size limits were compared at four depths along the Bragg curve. At all depths, the spread of the LET spectra increases as the step size limit decreases. In the dose buildup region (z = 1.9 cm) and in the region proximal to the Bragg peak (z = 3.95 cm), the frequency-mean LET does not vary with decreasing step size limit. At the Bragg peak (z = 4.75 cm) and in the distal edge (z = 4.85 cm), the frequency-mean LET decreases with decreasing step size limit. The energy spectrum at any specified depth does not vary with the step size limit. Conclusion: The calculated LET has a spectral distribution rather than a single value at any depth along the Bragg curve, and the spread of the computed spectrum depends on the tracking step limit. Incorporating the LET spectrum distribution into robust IMPT optimization may provide a more accurate biological dose distribution than using the dose- or fluence-averaged LET. NIH Program Project Grant P01CA021239.

  8. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  9. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
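
    The particle lifecycle that the benchmark exercises can be illustrated with a serial toy version. This is an illustrative sketch only: the actual MCB is a parallel MPI code, and the purely absorbing slab model, `sigma_t`, and the function name below are invented for the example.

```python
import math
import random

def transmission_fraction(n_particles, sigma_t=1.0, slab_length=2.0, seed=1):
    """Estimate the fraction of particles crossing a purely absorbing slab.

    Exercises the basic Monte Carlo pattern named in the abstract:
    create a particle, track it by sampling an exponential free path,
    tally the outcome, and (implicitly) destroy the particle.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):                       # particle creation
        # tracking: sample a free path from the exponential distribution
        path = -math.log(1.0 - rng.random()) / sigma_t
        if path >= slab_length:                        # tally
            transmitted += 1
    return transmitted / n_particles                   # particle destruction
```

    For a purely absorbing slab the expected transmitted fraction is exp(-sigma_t × slab_length), so the tally can be checked against the analytic answer.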

  10. Evaluation of cumulative dose for cone-beam computed tomography (CBCT) scans within phantoms made from different compositions using Monte Carlo simulations.

    PubMed

    Abuhaimed, Abdullah; Martin, Colin J; Sankaralingam, Marimuthu; Oomen, Kurian; Gentle, David J

    2015-01-01

    Measurement of cumulative dose ƒ(0,150) with a small ionization chamber within standard polymethyl methacrylate (PMMA) CT head and body phantoms, 150 mm in length, is a possible practical method for cone-beam computed tomography (CBCT) dosimetry. This differs from evaluating cumulative dose under scatter equilibrium conditions within an infinitely long phantom ƒ(0,∞), which is proposed by AAPM TG-111 for CBCT dosimetry. The aim of this study was to investigate the feasibility of using ƒ(0,150) to estimate values for ƒ(0,∞) in long head and body phantoms made of PMMA, polyethylene (PE), and water, using beam qualities for tube potentials of 80-140 kV. The study also investigated the possibility of using 150 mm PE phantoms for assessment of ƒ(0,∞) within long PE phantoms, the ICRU/AAPM phantom. The influence of scan parameters, composition, and length of the phantoms was investigated. The capability of ƒ(0,150) to assess ƒ(0,∞) has been defined as the efficiency and assessed in terms of the ratios ε(ƒ(0,150) / ƒ(0,∞)). The efficiencies were calculated using Monte Carlo simulations for an On-Board Imager (OBI) system mounted on a TrueBeam linear accelerator. Head and body scanning protocols with beams of width 40-500 mm were used. Efficiencies ε(PMMA/PMMA) and ε(PE/PE) as a function of beam width exhibited three separate regions. For beam widths < 150 mm, ε(PMMA/PMMA) and ε(PE/PE) values were greater than 90% for the head and body phantoms. The efficiency values then fell rapidly with increasing beam width before levelling off at 74% for ε(PMMA/PMMA) and 69% for ε(PE/PE) for a 500 mm beam width. The quantities ε(PMMA/PE) and ε(PMMA/Water) varied with beam width in a different manner. Values at the centers of the phantoms for narrow beams were lower and increased to a steady state for ~100-150 mm wide beams, before declining with increasing the beam width, whereas values at the peripheries decreased steadily with beam width. Results for ε

  11. Multilevel sequential Monte Carlo samplers

    DOE PAGESBeta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
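
    The multilevel telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] can be demonstrated with a toy in which the "level-l solver" is a Taylor truncation of exp rather than a PDE discretisation. This is a hypothetical stand-in chosen for the sketch (the function name and per-level sample allocation are invented); the key point it shows is that coupled samples make the level corrections cheap to estimate because their variance decays with l.

```python
import math
import random

def mlmc_estimate(n_per_level=(20000, 5000, 2000, 1000, 500, 300, 200, 100, 100),
                  seed=7):
    """Toy multilevel Monte Carlo estimate of E[f_L(U)], U ~ Uniform(0,1),
    where f_l is the Taylor expansion of exp truncated at order l
    (a stand-in for a solver at discretisation level h_l).

    Uses the telescoping identity E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}],
    with the SAME sample u coupling the two levels in each correction term,
    so Var(f_l - f_{l-1}) shrinks rapidly and coarse levels get most samples.
    """
    rng = random.Random(seed)

    def f(l, x):
        return sum(x**k / math.factorial(k) for k in range(l + 1))

    total = 0.0
    for l, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            u = rng.random()  # one sample drives both levels of the correction
            s += f(l, u) - (f(l - 1, u) if l > 0 else 0.0)
        total += s / n
    return total
```

    Since E[exp(U)] = e − 1 for U uniform on [0, 1], the estimate can be checked against that value; the per-level sample counts decrease because the variance of each correction term does.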

  12. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions. 21 refs., 6 figs., 5 tabs.
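
    The pseudo-random versus quasi-random comparison can be reproduced in miniature with the base-2 van der Corput sequence, the one-dimensional case of the Halton sequence discussed in the abstract. This is a sketch under stated assumptions: the integrand x², the node count, and the function names are chosen for illustration and do not come from the paper.

```python
import math
import random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput (1-D Halton) sequence:
    reflect the base-b digits of n about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def compare_mc_qmc(n=4096, seed=0):
    """Integrate f(x) = x^2 on [0, 1] (exact value 1/3) with pseudo-random
    and quasi-random nodes; return the two absolute errors."""
    rng = random.Random(seed)
    f = lambda x: x * x
    mc = sum(f(rng.random()) for _ in range(n)) / n
    qmc = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
    return abs(mc - 1.0 / 3.0), abs(qmc - 1.0 / 3.0)
```

    For this smooth one-dimensional integrand the quasi-random error decays roughly like 1/N rather than 1/√N, so at N = 4096 the quasi-random estimate is typically far more accurate, consistent with the paper's conclusion that the QMC advantage is largest for smooth, low-dimensional integrands.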

  13. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than {1}/{2} in high dimensions.

  14. SU-E-T-559: Monte Carlo Simulation of Cobalt-60 Teletherapy Unit Modeling In-Field and Out-Of-Field Doses for Applications in Computational Radiation Dosimetry

    SciTech Connect

    Petroccia, H; Bolch, W; Li, Z; Mendenhall, N

    2015-06-15

    Purpose: Mean organ doses from structures located in-field and outside of field boundaries during radiotherapy treatment must be considered when looking at secondary effects. Treatment planning for patients with 40 years of follow-up did not include 3-D treatment planning images and did not estimate dose to structures outside the direct field. Therefore, it is of interest to correlate actual clinical events with doses received. Methods: Accurate models of radiotherapy machines combined with whole body computational phantoms using Monte Carlo methods allow for dose reconstructions intended for studies on late radiation effects. The Theratron-780 radiotherapy unit and anatomically realistic hybrid computational phantoms are modeled in the Monte Carlo radiation transport code MCNPX. The major components of the machine, including the source capsule, lead in the unit-head, collimators (fixed/adjustable), and trimmer bars, are simulated. The MCNPX transport code is used to compare calculated values in a water phantom with published data from BJR suppl. 25 for in-field doses and experimental data from AAPM Task Group No. 36 for out-of-field doses. Next, the validated cobalt-60 teletherapy model is combined with the UF/NCI Family of Reference Hybrid Computational Phantoms as a methodology for estimating organ doses. Results: The model of the Theratron-780 has been shown to agree with percentage depth dose data within approximately 1%, and for out-of-field doses the machine is shown to agree within 8.8%. Organ doses are reported for the reference hybrid phantoms. Conclusion: Combining the UF/NCI Family of Reference Hybrid Computational Phantoms with a validated model of the Theratron-780 allows for organ dose estimates of both in-field and out-of-field organs. By changing field size and position and adding patient-specific blocking, more complicated treatment set-ups can be recreated for patients treated historically, particularly those who lack both 2D/3D image sets.

  15. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
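
    The variational Monte Carlo idea behind a program like VARHATOM can be sketched as follows. This is an independent illustrative sketch, not the ESTSC source: it works in atomic units with the trial wavefunction ψ = exp(−αr), for which the local energy is E_L = −α²/2 + (α − 1)/r, exactly −0.5 hartree everywhere when α = 1 (the exact ground state).

```python
import math
import random

def vmc_hydrogen(alpha=0.9, n_steps=20000, step=0.5, seed=3):
    """Variational Monte Carlo estimate of the hydrogen ground-state
    energy (atomic units) with trial wavefunction psi = exp(-alpha*r).

    Metropolis sampling of |psi|^2 in 3-D; the acceptance ratio is
    |psi(r')|^2 / |psi(r)|^2 = exp(-2*alpha*(r' - r)).
    """
    rng = random.Random(seed)
    pos = [1.0, 0.0, 0.0]
    r = 1.0
    e_sum, n_kept = 0.0, 0
    for i in range(n_steps):
        # symmetric uniform proposal in a small cube around the walker
        trial = [c + step * (rng.random() - 0.5) for c in pos]
        r_trial = math.sqrt(sum(c * c for c in trial))
        if rng.random() < math.exp(-2.0 * alpha * (r_trial - r)):
            pos, r = trial, r_trial            # accept the move
        if i >= n_steps // 5:                  # discard burn-in samples
            e_sum += -0.5 * alpha**2 + (alpha - 1.0) / r
            n_kept += 1
    return e_sum / n_kept
```

    The variational energy for this trial function is E(α) = α²/2 − α, minimized at α = 1 with E = −0.5 hartree, so the estimator can be checked against the analytic curve.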

  16. An assessment of the efficiency of methods for measurement of the computed tomography dose index (CTDI) for cone beam (CBCT) dosimetry by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Abuhaimed, Abdullah; Martin, Colin J.; Sankaralingam, Marimuthu; Gentle, David J.; McJury, Mark

    2014-10-01

    The IEC has introduced a practical approach to overcome shortcomings of the CTDI100 for measurements on wide beams employed for cone beam (CBCT) scans. This study evaluated the efficiency of this approach (CTDIIEC) for different arrangements using Monte Carlo simulation techniques, and compared CTDIIEC to the efficiency of CTDI100 for CBCT. Monte Carlo EGSnrc/BEAMnrc and EGSnrc/DOSXYZnrc codes were used to simulate the kV imaging system mounted on a Varian TrueBeam linear accelerator. The Monte Carlo model was benchmarked against experimental measurements and good agreement shown. Standard PMMA head and body phantoms with lengths 150, 600, and 900 mm were simulated. Beam widths studied ranged from 20-300 mm, and four scanning protocols using two acquisition modes were utilized. The efficiency values were calculated at the centre (ɛc) and periphery (ɛp) of the phantoms and for the weighted CTDI (ɛw). The efficiency values for CTDI100 were approximately constant for beam widths 20-40 mm, where ɛc(CTDI100), ɛp(CTDI100), and ɛw(CTDI100) were 74.7  ±  0.6%, 84.6  ±  0.3%, and 80.9  ±  0.4%, for the head phantom and 59.7  ±  0.3%, 82.1  ±  0.3%, and 74.9  ±  0.3%, for the body phantom, respectively. When beam width increased beyond 40 mm, ɛ(CTDI100) values fell steadily reaching ~30% at a beam width of 300 mm. In contrast, the efficiency of the CTDIIEC was approximately constant over all beam widths, demonstrating its suitability for assessment of CBCT. ɛc(CTDIIEC), ɛp(CTDIIEC), and ɛw(CTDIIEC) were 76.1  ±  0.9%, 85.9  ±  1.0%, and 82.2  ±  0.9% for the head phantom and 60.6  ±  0.7%, 82.8  ±  0.8%, and 75.8  ±  0.7%, for the body phantom, respectively, within 2% of ɛ(CTDI100) values for narrower beam widths. CTDI100,w and CTDIIEC,w underestimate CTDI∞,w by ~55% and ~18% for the head phantom and by ~56% and ~24% for the body phantom, respectively, using a clinical beam width 198 mm. The

  17. Product gas evolution above planar microstructured model catalysts--A combined scanning mass spectrometry, Monte Carlo, and Computational Fluid Dynamics study

    SciTech Connect

    Roos, M.; Bansmann, J.; Behm, R. J.; Zhang, D.; Deutschmann, O.

    2010-09-07

    The transport and distribution of reaction products above catalytically active Pt microstructures were studied by spatially resolved scanning mass spectrometry (SMS) in combination with Monte Carlo simulation and fluid dynamics calculations, using the oxidation of CO as the test reaction. The spatial gas distribution above the Pt fields was measured via a thin quartz capillary connected to a mass spectrometer. Measurements were performed in two different pressure regimes, characteristic of ballistic mass transfer and of diffusion involving multiple collisions for the motion of CO₂ product molecules between the sample and the capillary tip, and using differently sized and shaped Pt microstructures. The tip-height-dependent lateral resolution of the SMS measurements, as well as contributions from shadowing effects due to the mass transport limitations between the capillary tip and sample surface at close separations, were evaluated and analyzed. The data allow one to define measurement and reaction conditions where effects induced by the capillary tip can be neglected ("minimally invasive measurements") and provide a basis for the evaluation of catalyst activities on microstructured model systems, e.g., for catalyst screening or studies of transport effects.

  18. RMC_POT: a computer code for reverse Monte Carlo modeling the structure of disordered systems containing molecules of arbitrary complexity.

    PubMed

    Gereben, Orsolya; Pusztai, László

    2012-11-01

    An approach has been devised and tested for preserving molecular-dynamics-style molecular geometry, taking energetic considerations into account, during Reverse Monte Carlo (RMC) modeling. Instead of the commonly used fixed neighbor constraints, where molecules are held together by constraining the distance ranges available to specified atom pairs, here molecules are kept together via bond, angle, and dihedral potential energies. The scaled total potential energy contributes to the measure of the goodness-of-fit; thus, the atoms can be prevented from drifting apart. In some of the calculations, nonbonding (Lennard-Jones and Coulombic) potentials were also applied. The algorithm was successfully tested in an X-ray structure factor-based structure study of liquid dimethyl trisulfide, for which significantly more sensible results have now been obtained than in previous attempts with any earlier version of RMC modeling. It is envisaged that structural modeling of a large class of materials, primarily liquids and amorphous solids containing molecules of up to about 100 atoms, will make use of the new code in the near future. PMID:22782785

  19. Specific and Non-Specific Protein Association in Solution: Computation of Solvent Effects and Prediction of First-Encounter Modes for Efficient Configurational Bias Monte Carlo Simulations

    PubMed Central

    Cardone, Antonio; Pant, Harish; Hassan, Sergio A.

    2013-01-01

    Weak and ultra-weak protein-protein associations play a role in molecular recognition and can drive spontaneous self-assembly and aggregation. Such interactions are difficult to detect experimentally and pose a challenge to force fields and sampling techniques. A method is proposed to identify low-population protein-protein binding modes in aqueous solution. The method is designed to identify preferential first-encounter complexes from which the final complex(es) at equilibrium evolve. A continuum model is used to represent the effects of the solvent, which accounts for short- and long-range effects of water exclusion and for liquid-structure forces at protein/liquid interfaces. These effects control the behavior of proteins in close proximity and are optimized based on binding enthalpy data and simulations. An algorithm is described to construct a biasing function for self-adaptive configurational-bias Monte Carlo of a set of interacting proteins. The function allows mixing large and local changes in the spatial distribution of proteins, thereby enhancing sampling of relevant microstates. The method is applied to three binary systems. Generalization to multiprotein complexes is discussed. PMID:24044772

  20. Product gas evolution above planar microstructured model catalysts—A combined scanning mass spectrometry, Monte Carlo, and Computational Fluid Dynamics study

    NASA Astrophysics Data System (ADS)

    Roos, M.; Bansmann, J.; Zhang, D.; Deutschmann, O.; Behm, R. J.

    2010-09-01

    The transport and distribution of reaction products above catalytically active Pt microstructures were studied by spatially resolved scanning mass spectrometry (SMS) in combination with Monte Carlo simulation and fluid dynamics calculations, using the oxidation of CO as the test reaction. The spatial gas distribution above the Pt fields was measured via a thin quartz capillary connected to a mass spectrometer. Measurements were performed in two different pressure regimes, characteristic of ballistic mass transfer and of diffusion involving multiple collisions for the motion of CO2 product molecules between the sample and the capillary tip, and using differently sized and shaped Pt microstructures. The tip-height-dependent lateral resolution of the SMS measurements, as well as contributions from shadowing effects due to the mass transport limitations between the capillary tip and sample surface at close separations, were evaluated and analyzed. The data allow one to define measurement and reaction conditions where effects induced by the capillary tip can be neglected ("minimally invasive measurements") and provide a basis for the evaluation of catalyst activities on microstructured model systems, e.g., for catalyst screening or studies of transport effects.

  1. Using a Monte Carlo approach to evaluate seawater intrusion in the Oristano coastal aquifer: A case study from the AQUAGRID collaborative computing platform

    NASA Astrophysics Data System (ADS)

    Lecca, Giuditta; Cau, Pierluigi

    Uncertainties in the physical parameters of a groundwater system, due to the lack of direct access to the subsurface, strongly affect the design of water management policies, so that the risk of mismanagement becomes a critical factor in complex ecological and economic analyses. Stochastic modeling may help provide uncertainty quantification and also add robustness to the analysis by means of probabilistic forecasts. In this study a stochastic approach has been employed to model the hydraulic conductivity of a confining formation in a multi-layered coastal aquifer system under conditions of uncertainty. A Monte Carlo simulation, based on a coupled flow and transport 3D groundwater model, has been carried out to propagate the hydraulic conductivity parameter uncertainty to the groundwater model outputs, namely pressure head and salt concentration. The aim of the study is to assess the risk of seawater intrusion into the aquifer by means of probabilistic threshold analysis on the simulated groundwater concentrations for different aquifer exploitation schemes. The maximum difference in nodal concentrations, with reference to the homogeneous aquitard configuration, was found to be 95%, demonstrating how strongly the spatial variability of the hydraulic conductivity of the confining layer can affect the simulated salt concentrations. Such analysis enables better decisions about the management of the groundwater resource and guides additional field investigations consistent with environmental protection. The application workflow, based on the integration of both in-house developed and public domain software tools with hydrogeological data, has been deployed on a problem solving Grid platform (http://grida3.crs4.it). Further developments will include the planning of cost-effective additional field data acquisition based on the outcome of the stochastic model.
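
    The Monte Carlo workflow described above — sample an uncertain parameter, run the forward model, test the output against a threshold — can be sketched with a deliberately trivial, hypothetical forward model. The log-normal conductivity prior and the mixing relation c = 1/(1 + K) below are invented for illustration only; the study itself propagates uncertainty through a coupled 3-D flow and transport code.

```python
import math
import random

def exceedance_probability(n_runs=20000, threshold=0.5, seed=11):
    """Monte Carlo propagation of hydraulic-conductivity uncertainty
    through a toy forward model, estimating the probability that a
    relative salt concentration exceeds a threshold.

    Each iteration: draw a log-normal conductivity sample K, evaluate the
    (hypothetical) forward model c = 1 / (1 + K), and tally exceedances.
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_runs):
        k = math.exp(rng.gauss(0.0, 1.0))  # log-normal conductivity sample
        c = 1.0 / (1.0 + k)                # one forward-model run
        if c > threshold:
            exceed += 1
    return exceed / n_runs
```

    The same loop structure applies when each "forward-model run" is a full 3-D flow and transport simulation, which is why such studies are natural candidates for Grid or cluster platforms: the runs are independent and embarrassingly parallel.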

  2. Computation of Ion Charge State Distributions After Inner-Shell Ionization In Ne, Ar And Kr Atoms Using Monte Carlo Simulation

    SciTech Connect

    Mohammedein, Adel M.; Ghoneim, Adel A.; Al-Zanki, Jasem M.; El-Essawy, Ashraf H.

    2010-01-05

    Atomic reorganization starts by filling the initial inner-shell vacancy through a radiative transition (x-ray) or a non-radiative transition (Auger and Coster-Kronig processes). New vacancies created during this atomic reorganization may in turn be filled by further radiative and non-radiative transitions until all vacancies reach the outermost occupied shells. The production of an inner-shell vacancy in an atom and the de-excitation decays through radiative and non-radiative transitions may change the atomic potential; this change leads to the emission of an additional electron into the continuum (electron shake-off processes). In the present work, the ion charge state distributions (CSD) and the mean charge of ions produced by inner-shell vacancy de-excitation decay are calculated for neutral Ne, Ar and Kr atoms. The calculations are carried out using a Monte Carlo (MC) technique to simulate the cascade development after primary vacancy production. The radiative and non-radiative transitions for each vacancy are calculated in the simulation. In addition, the changes of transition energies and transition rates due to multiple vacancies produced in the atomic configurations through the cascade development are considered in the present work. It is found that considering the electron shake-off process and the closing of non-allowed non-radiative channels improves the results for both the charge state distributions (CSD) and the average charge state. To check the validity of the present calculations, the results obtained are compared with available theoretical and experimental data. The present results are found to agree well with the available theoretical and experimental values.
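
    The cascade logic described above can be caricatured in a few lines: each vacancy relaxes either radiatively or via an Auger transition that ejects an electron and doubles the vacancy count one shell out. The branching ratio and shell depth below are illustrative toys, not the paper's atomic data:

```python
import random

# A much-simplified cascade sketch in the spirit of the simulation above:
# each vacancy is filled either radiatively (no electron emitted) or by an
# Auger transition, which ejects one electron (charge +1) and leaves two
# vacancies one shell further out. The branching ratio and shell depth are
# illustrative, not real atomic data for Ne, Ar or Kr.
random.seed(2)
P_AUGER = 0.9     # hypothetical non-radiative branching ratio
MAX_DEPTH = 3     # vacancies reaching the outermost shell stop cascading

def cascade_charge(depth=0):
    """Extra ion charge produced by relaxing one vacancy at this depth."""
    if depth >= MAX_DEPTH:
        return 0
    if random.random() < P_AUGER:
        # Auger: one ejected electron plus two new, shallower vacancies.
        return 1 + cascade_charge(depth + 1) + cascade_charge(depth + 1)
    return cascade_charge(depth + 1)   # radiative: vacancy just moves out

n = 20000
counts = {}
for _ in range(n):
    q = 1 + cascade_charge()           # +1 for the initial ionization
    counts[q] = counts.get(q, 0) + 1
dist = {q: c / n for q, c in sorted(counts.items())}
mean_q = sum(q * p for q, p in dist.items())
print(dist, f"mean charge ~ {mean_q:.2f}")
```

    Tallying many such cascades yields the charge state distribution and mean charge; the real simulation additionally tracks transition energies and rates as the configuration changes.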

  3. Dosimetry of a cone beam CT device for oral and maxillofacial radiology using Monte Carlo techniques and ICRP adult reference computational phantoms

    PubMed Central

    Morant, JJ; Salvadó, M; Hernández-Girón, I; Casanovas, R; Ortega, R; Calzado, A

    2013-01-01

    Objectives: The aim of this study was to calculate organ and effective doses for a range of available protocols in a particular cone beam CT (CBCT) scanner dedicated to dentistry and to derive effective dose conversion factors. Methods: Monte Carlo simulations were used to calculate organ and effective doses using the International Commission on Radiological Protection voxel adult male and female reference phantoms (AM and AF) in an i-CAT CBCT. Nine different fields of view (FOVs) were simulated considering full- and half-rotation modes, and also a high-resolution acquisition for a particular protocol. Dose–area product (DAP) was measured. Results: Dose to organs varied for the different FOVs, usually being higher in the AF phantom. For 360°, effective doses were in the range of 25–66 μSv, and 46 μSv for full head. The highest contributions to the effective dose corresponded to the remainder (31%; 27–36% range), salivary glands (23%; 20–29%), thyroid (13%; 8–17%), red bone marrow (10%; 9–11%) and oesophagus (7%; 4–10%). The high-resolution protocol doubled the standard-resolution doses. DAP values were between 181 mGy cm2 and 556 mGy cm2 for 360°. For 180° protocols, dose to organs, effective dose and DAP were approximately 40% lower. A conversion factor (DAP to effective dose) of 0.130 ± 0.006 μSv mGy−1 cm−2 was derived for all the protocols, excluding full head. A wide variation in dose to the eye lens and thyroid was found when shifting the FOV in the AF phantom. Conclusions: Organ and effective doses varied according to field size, acquisition angle and positioning of the beam relative to radiosensitive organs. A good positive correlation between calculated effective dose and measured DAP was found. PMID:22933532
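
    The reported conversion factor makes dose estimation from a DAP measurement a one-line calculation; a minimal sketch applying the paper's factor to the DAP extremes it quotes for 360° protocols:

```python
# The abstract reports a DAP-to-effective-dose conversion factor of
# 0.130 uSv per mGy*cm^2 (excluding the full-head protocol). The DAP
# values below are the 360-degree extremes reported in the abstract.
CONVERSION_FACTOR = 0.130  # uSv / (mGy * cm^2)

def effective_dose_uSv(dap_mGy_cm2):
    """Estimate effective dose from a measured dose-area product."""
    return CONVERSION_FACTOR * dap_mGy_cm2

for dap in (181.0, 556.0):
    print(f"DAP = {dap:5.0f} mGy*cm^2  ->  E ~ {effective_dose_uSv(dap):5.1f} uSv")
```

    The resulting estimates (roughly 24–72 μSv) bracket the 25–66 μSv effective-dose range reported above, which is the sense in which DAP serves as a practical dose surrogate.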

  4. Dosimetric accuracy of a deterministic radiation transport based {sup 192}Ir brachytherapy treatment planning system. Part III. Comparison to Monte Carlo simulation in voxelized anatomical computational models

    SciTech Connect

    Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.

    2013-01-15

    Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.

  5. A Monte Carlo Simulation Investigating the Validity and Reliability of Ability Estimation in Item Response Theory with Speeded Computer Adaptive Tests

    ERIC Educational Resources Information Center

    Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M.

    2010-01-01

    Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…

  6. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
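
    For context, the classical baseline the quantum algorithm improves on can be demonstrated numerically: the sample mean of n i.i.d. draws with variance σ² has RMS error σ/√n, i.e. O(σ²/ε²) samples for accuracy ε, versus roughly O(σ/ε) for the quantum estimator. A quick check of the classical scaling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Monte Carlo mean estimation: the RMS error of the sample mean
# of n i.i.d. draws scales as sigma / sqrt(n). The subroutine here is a
# stand-in with known true mean 0 and variance 1.
def rms_error(n, trials=2000):
    estimates = rng.standard_normal((trials, n)).mean(axis=1)
    return np.sqrt(np.mean(estimates**2))

for n in (100, 400, 1600):
    print(f"n={n:5d}: RMS error ~ {rms_error(n):.4f}  (theory {1/np.sqrt(n):.4f})")
```

    Quadrupling the sample count only halves the error, which is exactly the 1/√n wall that the near-quadratic quantum speedup addresses.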

  7. ARCHERRT – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    PubMed Central

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George

    2014-01-01

    Purpose: Using graphical processing unit (GPU) hardware technology, an extremely fast Monte Carlo (MC) code, ARCHERRT, is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code GEANT4. Gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For the clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. The gamma index test is performed for voxels whose dose is greater than 10% of the maximum dose. For 2%/2 mm criteria, the passing rates for the prostate, lung, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified
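
    The gamma index used above to compare ARCHERRT with GEANT4 combines a dose tolerance and a distance-to-agreement tolerance: a voxel passes if some nearby reference point agrees within both. A minimal 1D sketch (the published analyses use full 3D dose grids):

```python
import numpy as np

# Minimal 1D gamma-index analysis with 2%/2 mm criteria, evaluating only
# voxels above 10% of the maximum dose, as in the comparison above.
def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.02, dist_tol_mm=2.0):
    x = np.arange(len(dose_ref)) * spacing_mm
    d_max = dose_ref.max()
    mask = dose_eval > 0.1 * d_max
    gammas = []
    for i in np.flatnonzero(mask):
        dd = (dose_eval[i] - dose_ref) / (dose_tol * d_max)  # dose term
        dr = (x[i] - x) / dist_tol_mm                        # distance term
        gammas.append(np.sqrt(dd**2 + dr**2).min())
    return np.mean(np.array(gammas) <= 1.0)

# Identical distributions pass everywhere:
ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)
print(gamma_pass_rate(ref, ref, spacing_mm=1.0))  # -> 1.0
```

    A uniform 5% dose error, by contrast, fails in the flat peak region (no neighbor within 2 mm can absorb the discrepancy), which is why the criteria catch systematic dose differences, not just misalignment.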

  8. Investigation of practical approaches to evaluating cumulative dose for cone beam computed tomography (CBCT) from standard CT dosimetry measurements: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Abuhaimed, Abdullah; Martin, Colin J.; Sankaralingam, Marimuthu; Gentle, David J.

    2015-07-01

    A function called Gx(L) was introduced by the International Commission on Radiation Units and Measurements (ICRU) Report-87 to facilitate measurement of cumulative dose for CT scans within long phantoms as recommended by the American Association of Physicists in Medicine (AAPM) TG-111. The Gx(L) function is equal to the ratio of the cumulative dose at the middle of a CT scan to the volume weighted CTDI (CTDIvol), and was investigated for conventional multi-slice CT scanners operating with a moving table. As the stationary table mode, which is the basis for cone beam CT (CBCT) scans, differs from that used for conventional CT scans, the aim of this study was to investigate the extension of the Gx(L) function to CBCT scans. An On-Board Imager (OBI) system integrated with a TrueBeam linac was simulated with Monte Carlo EGSnrc/BEAMnrc, and the absorbed dose was calculated within PMMA, polyethylene (PE), and water head and body phantoms using EGSnrc/DOSXYZnrc, where the PE body phantom emulated the ICRU/AAPM phantom. Beams of width 40-500 mm and beam qualities at tube potentials of 80-140 kV were studied. Application of a modified function of beam width (W) termed Gx(W), for which the cumulative dose for CBCT scans f(0) is normalized to the weighted CTDI (CTDIw) for a reference beam of width 40 mm, was investigated as a possible option. However, differences were found in Gx(W) with tube potential, especially for body phantoms, and these were considered to be due to differences in geometry between wide beams used for CBCT scans and those for conventional CT. Therefore, a modified function Gx(W)100 has been proposed, taking the form of values of f(0) at each position in a long phantom, normalized with respect to dose indices f100(150)x measured with a 100 mm pencil ionization chamber within standard 150 mm PMMA phantoms, using the same scanning parameters, beam widths and positions within the phantom. f100(150)x averages the dose resulting from

  9. Investigation of practical approaches to evaluating cumulative dose for cone beam computed tomography (CBCT) from standard CT dosimetry measurements: a Monte Carlo study.

    PubMed

    Abuhaimed, Abdullah; Martin, Colin J; Sankaralingam, Marimuthu; Gentle, David J

    2015-07-21

    A function called Gx(L) was introduced by the International Commission on Radiation Units and Measurements (ICRU) Report-87 to facilitate measurement of cumulative dose for CT scans within long phantoms as recommended by the American Association of Physicists in Medicine (AAPM) TG-111. The Gx(L) function is equal to the ratio of the cumulative dose at the middle of a CT scan to the volume weighted CTDI (CTDIvol), and was investigated for conventional multi-slice CT scanners operating with a moving table. As the stationary table mode, which is the basis for cone beam CT (CBCT) scans, differs from that used for conventional CT scans, the aim of this study was to investigate the extension of the Gx(L) function to CBCT scans. An On-Board Imager (OBI) system integrated with a TrueBeam linac was simulated with Monte Carlo EGSnrc/BEAMnrc, and the absorbed dose was calculated within PMMA, polyethylene (PE), and water head and body phantoms using EGSnrc/DOSXYZnrc, where the PE body phantom emulated the ICRU/AAPM phantom. Beams of width 40-500 mm and beam qualities at tube potentials of 80-140 kV were studied. Application of a modified function of beam width (W) termed Gx(W), for which the cumulative dose for CBCT scans f(0) is normalized to the weighted CTDI (CTDIw) for a reference beam of width 40 mm, was investigated as a possible option. However, differences were found in Gx(W) with tube potential, especially for body phantoms, and these were considered to be due to differences in geometry between wide beams used for CBCT scans and those for conventional CT. Therefore, a modified function Gx(W)100 has been proposed, taking the form of values of f(0) at each position in a long phantom, normalized with respect to dose indices f100(150)x measured with a 100 mm pencil ionization chamber within standard 150 mm PMMA phantoms, using the same scanning parameters, beam widths and positions within the phantom. f100(150)x averages the dose resulting from
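
    Both records above revolve around normalizing a cumulative dose to a reference dose index (CTDIvol for Gx(L), CTDIw or f100(150)x for the CBCT variants). The arithmetic itself is a simple ratio, sketched here with purely illustrative dose values:

```python
# Gx-type functions are ratios of a cumulative dose f(0) at the middle of
# a long phantom to a reference dose index; the values below are
# hypothetical placeholders, not measurements from the study.
def g_x(cumulative_dose, reference_index):
    """Generic normalization: cumulative dose over a reference dose index."""
    return cumulative_dose / reference_index

f0 = 12.4        # hypothetical cumulative dose at scan midpoint, mGy
ctdi_vol = 10.0  # hypothetical reference index (e.g. CTDIvol), mGy
print(f"Gx = {g_x(f0, ctdi_vol):.2f}")
```

    The study's contribution is not this arithmetic but the choice of reference index: replacing CTDIw with the pencil-chamber index f100(150)x removes the tube-potential dependence seen for wide CBCT beams.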

  10. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  11. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.

  12. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  13. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  14. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  15. Monte Carlo techniques for analyzing deep-penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-02-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.
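
    The splitting and Russian-roulette techniques reviewed above can be illustrated with a minimal weight-control routine. The weight windows below are illustrative; the scheme is unbiased because splitting conserves particle weight exactly, while roulette conserves it in expectation (a particle of weight w survives with probability w/w_low and carries weight w_low):

```python
import random

# Minimal sketch of splitting / Russian-roulette weight control for
# deep-penetration Monte Carlo. W_HIGH and W_LOW are illustrative windows.
W_HIGH, W_LOW = 2.0, 0.25

def weight_control(particles, rng=random.random):
    out = []
    for w in particles:
        if w > W_HIGH:
            # Split into n copies; total weight w is conserved exactly.
            n = int(w / W_HIGH) + 1
            out.extend([w / n] * n)
        elif w < W_LOW:
            # Russian roulette: survive with probability w / W_LOW,
            # survivors carry weight W_LOW -> unbiased in expectation.
            if rng() < w / W_LOW:
                out.append(W_LOW)
        else:
            out.append(w)
    return out

random.seed(1)
before = [5.0, 1.0, 0.05]
after = weight_control(before)
print(after)
```

    Heavy particles in important regions are multiplied to improve statistics there, while unimportant low-weight histories are terminated cheaply, which is the core of the splitting/roulette biasing discussed above.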

  16. Recent advances and future prospects for Monte Carlo

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  17. Quantum Monte Carlo : not just for energy levels.

    SciTech Connect

    Nollett, K. M.; Physics

    2007-01-01

    Quantum Monte Carlo and realistic interactions can provide well-motivated vertices and overlaps for DWBA analyses of reactions. Given an interaction in vacuum, there are several computational approaches to nuclear systems, as you have been hearing: the no-core shell model with Lee-Suzuki or Bloch-Horowitz treatments of the Hamiltonian; coupled clusters with a G-matrix interaction; density functional theory, granted an energy functional derived from the interaction; and quantum Monte Carlo, in its variational Monte Carlo and Green's function Monte Carlo forms. The last two work directly with a bare interaction and bare operators and describe the wave function without expanding in basis functions, so they have rather different sets of advantages and disadvantages from the others. Variational Monte Carlo (VMC) is built on a sophisticated Ansatz for the wave function, with shell-model-like structure modified by operator correlations. Green's function Monte Carlo (GFMC) uses an operator method to project the true ground state out of a reasonable guess wave function.

  18. Improving x-ray fluorescence signal for benchtop polychromatic cone-beam x-ray fluorescence computed tomography by incident x-ray spectrum optimization: A Monte Carlo study

    SciTech Connect

    Manohar, Nivedh; Cho, Sang Hyun

    2014-10-15

    Purpose: To develop an accurate and comprehensive Monte Carlo (MC) model of an experimental benchtop polychromatic cone-beam x-ray fluorescence computed tomography (XFCT) setup and apply this MC model to optimize incident x-ray spectrum for improving production/detection of x-ray fluorescence photons from gold nanoparticles (GNPs). Methods: A detailed MC model, based on an experimental XFCT system, was created using the Monte Carlo N-Particle (MCNP) transport code. The model was validated by comparing MC results including x-ray fluorescence (XRF) and scatter photon spectra with measured data obtained under identical conditions using 105 kVp cone-beam x-rays filtered by either 1 mm of lead (Pb) or 0.9 mm of tin (Sn). After validation, the model was used to investigate the effects of additional filtration of the incident beam with Pb and Sn. Supplementary incident x-ray spectra, representing heavier filtration (Pb: 2 and 3 mm; Sn: 1, 2, and 3 mm) were computationally generated and used with the model to obtain XRF/scatter spectra. Quasimonochromatic incident x-ray spectra (81, 85, 90, 95, and 100 keV with 10 keV full width at half maximum) were also investigated to determine the ideal energy for distinguishing gold XRF signal from the scatter background. Fluorescence signal-to-dose ratio (FSDR) and fluorescence-normalized scan time (FNST) were used as metrics to assess results. Results: Calculated XRF/scatter spectra for 1-mm Pb and 0.9-mm Sn filters matched (r ≥ 0.996) experimental measurements. Calculated spectra representing additional filtration for both filter materials showed that the spectral hardening improved the FSDR at the expense of requiring a much longer FNST. In general, using Sn instead of Pb, at a given filter thickness, allowed an increase of up to 20% in FSDR, more prominent gold XRF peaks, and up to an order of magnitude decrease in FNST. Simulations using quasimonochromatic spectra suggested that increasing source x-ray energy, in the

  19. Improving x-ray fluorescence signal for benchtop polychromatic cone-beam x-ray fluorescence computed tomography by incident x-ray spectrum optimization: A Monte Carlo study

    PubMed Central

    Manohar, Nivedh; Jones, Bernard L.; Cho, Sang Hyun

    2014-01-01

    Purpose: To develop an accurate and comprehensive Monte Carlo (MC) model of an experimental benchtop polychromatic cone-beam x-ray fluorescence computed tomography (XFCT) setup and apply this MC model to optimize incident x-ray spectrum for improving production/detection of x-ray fluorescence photons from gold nanoparticles (GNPs). Methods: A detailed MC model, based on an experimental XFCT system, was created using the Monte Carlo N-Particle (MCNP) transport code. The model was validated by comparing MC results including x-ray fluorescence (XRF) and scatter photon spectra with measured data obtained under identical conditions using 105 kVp cone-beam x-rays filtered by either 1 mm of lead (Pb) or 0.9 mm of tin (Sn). After validation, the model was used to investigate the effects of additional filtration of the incident beam with Pb and Sn. Supplementary incident x-ray spectra, representing heavier filtration (Pb: 2 and 3 mm; Sn: 1, 2, and 3 mm) were computationally generated and used with the model to obtain XRF/scatter spectra. Quasimonochromatic incident x-ray spectra (81, 85, 90, 95, and 100 keV with 10 keV full width at half maximum) were also investigated to determine the ideal energy for distinguishing gold XRF signal from the scatter background. Fluorescence signal-to-dose ratio (FSDR) and fluorescence-normalized scan time (FNST) were used as metrics to assess results. Results: Calculated XRF/scatter spectra for 1-mm Pb and 0.9-mm Sn filters matched (r ≥ 0.996) experimental measurements. Calculated spectra representing additional filtration for both filter materials showed that the spectral hardening improved the FSDR at the expense of requiring a much longer FNST. In general, using Sn instead of Pb, at a given filter thickness, allowed an increase of up to 20% in FSDR, more prominent gold XRF peaks, and up to an order of magnitude decrease in FNST. Simulations using quasimonochromatic spectra suggested that increasing source x-ray energy, in the

  20. Monte Carlo simulation of an expanding gas

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1989-01-01

    By application of simple computer graphics techniques, the statistical performance of two Monte Carlo methods used in the simulation of rarefied gas flows is assessed. Specifically, two direct simulation Monte Carlo (DSMC) methods developed by Bird and Nanbu are considered. The graphics techniques are found to be of great benefit in the reduction and interpretation of the large volume of data generated, thus enabling important conclusions to be drawn about the simulation results. Hence, it is discovered that the method of Nanbu suffers from increased statistical fluctuations, thereby prohibiting its use in the solution of practical problems.

  1. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, S.C.

    1998-12-01

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed. © 1998 American Institute of Physics.

  2. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, Steven C.

    1998-12-21

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  3. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.

    1998-08-25

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  4. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  5. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code tuned to its needs. Modern experiments would benefit from a universal code (e.g. PYTHIA) that would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  6. Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program

    ERIC Educational Resources Information Center

    Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.

    2004-01-01

    The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na(super +), Cl(super -), and Ar on a personal computer to show that it is easily feasible to…

  7. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
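    The core idea, solving an exact model approximately by statistical sampling, fits in a few lines. A minimal Python sketch (an illustration only, unrelated to MCNP): π is estimated from the fraction of uniform random points that land inside a quarter circle.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by uniform sampling of the unit square: the
    fraction of points inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples
```

    With 100,000 samples the estimate typically lands within a few hundredths of π; the statistical error shrinks as 1/sqrt(N), the hallmark of the method.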

  8. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R. |

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using seven PESs, three trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
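    The blocking technique used above to stabilize the C{sub 3} error estimates is generic: grouping correlated samples into blocks longer than the correlation time yields an honest standard error where the naive formula underestimates it. A toy Python sketch (function names are mine; an AR(1) chain stands in for correlated Monte Carlo output):

```python
import random

def block_error(data, block_size):
    """Standard error of the mean computed from block averages; valid
    even when successive samples are correlated, provided the blocks
    are longer than the correlation time."""
    n_blocks = len(data) // block_size
    means = [sum(data[i * block_size:(i + 1) * block_size]) / block_size
             for i in range(n_blocks)]
    grand = sum(means) / n_blocks
    var = sum((m - grand) ** 2 for m in means) / (n_blocks - 1)
    return (var / n_blocks) ** 0.5

# Correlated toy data: an AR(1) chain standing in for Monte Carlo output.
rng = random.Random(1)
x, data = 0.0, []
for _ in range(20_000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    data.append(x)

naive = block_error(data, 1)      # ignores correlation, too optimistic
blocked = block_error(data, 200)  # blocks exceed the correlation time
```

    For this chain the blocked error is several times the naive one, which is exactly why blocking matters for correlated QMC data.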

  9. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.; Physics

    2008-01-01

    Variational Monte Carlo and Green's function Monte Carlo are powerful tools for calculations of properties of light nuclei using realistic two-nucleon (NN) and three-nucleon (NNN) potentials. Recently the GFMC method has been extended to multiple states with the same quantum numbers. The combination of the Argonne v18 two-nucleon and Illinois-2 three-nucleon potentials gives a good prediction of many energies of nuclei up to {sup 12}C. A number of other recent results are presented: comparison of binding energies with those obtained by the no-core shell model; the incompatibility of modern nuclear Hamiltonians with a bound tetra-neutron; difficulties in computing RMS radii of very weakly bound nuclei, such as {sup 6}He; center-of-mass effects on spectroscopic factors; and the possible use of an artificial external well in calculations of neutron-rich isotopes.

  10. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.
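    The kind of simulation the book teaches can be sketched with the textbook example of Metropolis sampling of the 2D Ising model. The Python below is my own minimal toy (small lattice, single-spin-flip updates, names mine), not code from the book:

```python
import math, random

def metropolis_ising(L=16, beta=0.6, sweeps=200, seed=2):
    """Metropolis sampling of the 2D Ising model on an L x L periodic
    lattice; returns the mean absolute magnetization per spin."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb          # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]      # accept the flip
        if sweep >= sweeps // 2:              # discard equilibration
            mags.append(abs(sum(map(sum, spin))) / (L * L))
    return sum(mags) / len(mags)
```

    At beta = 0.6, above the critical coupling, the lattice stays ordered and the magnetization per spin comes out close to 1, as statistical mechanics predicts.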

  11. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The fixed-node diffusion quantum Monte Carlo (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
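    The diffusion-and-branching mechanism underlying diffusion QMC can be illustrated on a toy problem. The sketch below is my own construction, not the author's code: unguided diffusion Monte Carlo for a 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. Real atomic calculations add importance sampling and fixed nodes.

```python
import math, random

def dmc_harmonic(n_walkers=500, dt=0.05, steps=2000, seed=3):
    """Unguided diffusion Monte Carlo for V(x) = x^2/2.  Walkers
    diffuse freely, then branch with weight exp(-(V - E_ref)*dt).
    The mean potential energy over the walker ensemble estimates the
    ground-state energy (exactly 0.5 in these units)."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref, estimates = 0.0, []
    for step in range(steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))         # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)  # branching weight
            copies = min(int(w + rng.random()), 3)     # stochastic rounding
            new.extend([x] * copies)
        walkers = new or [0.0]
        vbar = sum(0.5 * x * x for x in walkers) / len(walkers)
        # population control: steer E_ref to hold the ensemble size
        e_ref = vbar + 0.5 * (1.0 - len(walkers) / n_walkers)
        if step >= steps // 2:
            estimates.append(vbar)
    return sum(estimates) / len(estimates)
```

    The sensitivity to the time step dt mentioned in the abstract is visible even in this toy: shrinking dt reduces the discretization bias at the cost of slower projection.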

  12. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  13. Monte Carlo techniques for analyzing deep penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
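    Splitting and Russian roulette, two of the techniques reviewed above, are unbiased because both games preserve the expected particle weight. A schematic weight-window pass in Python (thresholds and names are illustrative, not from any production code):

```python
import random

def weight_window(particles, w_low=0.25, w_high=2.0, seed=4):
    """Apply splitting above w_high and Russian roulette below w_low.
    Splitting conserves total weight exactly; roulette conserves it in
    expectation, which is what keeps the tallies unbiased."""
    rng = random.Random(seed)
    out = []
    for w in particles:
        if w > w_high:                       # split into n lighter copies
            n = int(w / w_high) + 1
            out.extend([w / n] * n)
        elif w < w_low:                      # roulette: survive w.p. w/w_low
            if rng.random() < w / w_low:
                out.append(w_low)
        else:                                # inside the window: untouched
            out.append(w)
    return out
```

    Averaged over many histories, the total weight leaving the routine matches the total weight entering it, while the population is concentrated into particles of comparable importance.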

  14. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  15. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  16. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  17. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGESBeta

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
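    The telescoping-sum idea behind multilevel Monte Carlo can be shown on a much simpler SDE than the Landau–Fokker–Planck equation. The Python below is my own toy, not the authors' method: Euler–Maruyama for geometric Brownian motion, with coarse and fine paths coupled through shared Brownian increments, estimating E[S_T] = S0·exp(rT).

```python
import math, random

def mlmc_estimate(levels=5, n0=20_000, seed=5):
    """Multilevel Monte Carlo for E[S_T] of geometric Brownian motion
    dS = r*S dt + sigma*S dW, via the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    s0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

    def level_estimate(l, n):
        nf = 2 ** l                         # fine steps on level l
        dt = T / nf
        total = 0.0
        for _ in range(n):
            sf, sc, dw_pair = s0, s0, 0.0
            for step in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dt))
                sf += r * sf * dt + sigma * sf * dw
                dw_pair += dw
                if step % 2 == 1 and l > 0:  # coarse step reuses summed dW
                    sc += r * sc * (2 * dt) + sigma * sc * dw_pair
                    dw_pair = 0.0
            total += sf if l == 0 else sf - sc
        return total / n

    # fewer samples on finer levels, where the correction variance is small
    return sum(level_estimate(l, max(n0 // 2 ** l, 500))
               for l in range(levels))

exact = 1.0 * math.exp(0.05 * 1.0)          # E[S_T] = S0 * exp(r*T)
```

    The point of the coupling is that the level corrections P_l − P_{l−1} have small variance, so the fine, expensive levels need few samples, which is the source of the cost reduction the abstract quantifies.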

  18. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  19. Quantum Monte Carlo calculations for light nuclei.

    SciTech Connect

    Wiringa, R. B.

    1998-10-23

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A {le} 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (J{pi}, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  20. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  1. Modulated pulse bathymetric lidar Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia

    2015-10-01

    A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by the Monte Carlo method with a modulated pulse propagation model and processed by mathematical tools such as cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding target contrast enhancement. Further simulation experiments are performed with various modulation and reception variables to investigate their effects on bathymetric system performance.
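    The cross-correlation processing described above can be demonstrated with a toy signal chain. The Python sketch below is my own construction (a pseudorandom binary modulation stands in for the paper's modulated pulse): a known pulse is buried in noise at a known delay, and the delay is recovered from the peak of the cross-correlation.

```python
import random

def recover_delay(true_delay=120, n=1024, noise=2.0, seed=10):
    """Hide a pseudorandom binary modulated pulse in Gaussian noise at
    a known sample delay, then recover the delay from the peak of the
    cross-correlation between the return and the clean reference."""
    rng = random.Random(seed)
    pulse_len = 200
    pulse = [rng.choice((-1.0, 1.0)) for _ in range(pulse_len)]
    signal = [rng.gauss(0.0, noise) for _ in range(n)]
    for t, p in enumerate(pulse):
        signal[true_delay + t] += p          # embed the modulated pulse

    def corr(lag):                           # un-normalized cross-correlation
        return sum(signal[lag + t] * pulse[t] for t in range(pulse_len))

    return max(range(n - pulse_len), key=corr)
```

    The sharp autocorrelation of the modulation code is what suppresses broadband backscatter relative to the target return, mirroring the contrast enhancement the abstract reports.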

  2. Monte Carlo analysis of magnetic aftereffect phenomena

    NASA Astrophysics Data System (ADS)

    Andrei, Petru; Stancu, Alexandru

    2006-04-01

    Magnetic aftereffect phenomena are analyzed by using the Monte Carlo technique. This technique has the advantage that it can be applied to any model of hysteresis. It is shown that a log t-type dependence of the magnetization can be qualitatively predicted even in the framework of hysteresis models with local history, such as the Jiles-Atherton model. These models are computationally much more efficient than models with global history, such as the Preisach model. Numerical results related to the decay of the magnetization as a function of time, as well as to the viscosity coefficient, are presented.

  3. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  4. Monte Carlo Simulation of Emission Tomography and other Medical Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Harrison, Robert L.

    2010-01-01

    As an introduction to Monte Carlo simulation of emission tomography, this paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.

  5. Applications of the Monte Carlo radiation transport toolkit at LLNL

    NASA Astrophysics Data System (ADS)

    Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine

    1999-09-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.

  6. Ordinal Hypothesis in ANOVA Designs: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Braver, Sanford L.; Sheets, Virgil L.

    Numerous designs using analysis of variance (ANOVA) to test ordinal hypotheses were assessed using a Monte Carlo simulation. Each statistic was computed on each of over 10,000 random samples drawn from a variety of population conditions. The number of groups, population variance, and patterns of population means were varied. In the non-null…

  7. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
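    In the same spirit as the article's cricket and football examples, a one-function Monte Carlo estimate of a classic dice probability (the de Méré problem; the exact answer is 1 − (5/6)⁴ ≈ 0.5177). This sketch is mine, not the article's listings:

```python
import random

def prob_at_least_one_six(n_rolls=4, trials=100_000, seed=6):
    """Monte Carlo estimate of the probability of seeing at least one
    six in n_rolls of a fair die.  Exact value: 1 - (5/6)**n_rolls."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.randrange(1, 7) == 6 for _ in range(n_rolls))
        for _ in range(trials)
    )
    return hits / trials
```

    Students can compare the simulated frequency with the closed-form answer, which is precisely the pedagogical move the article advocates.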

  8. Innovation Lecture Series - Carlos Dominguez

    NASA Video Gallery

    Carlos Dominguez is a Senior Vice President at Cisco Systems and a technology evangelist, speaking to and motivating audiences worldwide about how technology is changing how we communicate, collabo...

  9. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
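    A stripped-down version of Potts-model grain growth conveys the idea. The sketch below is my own simplification: it uses a square periodic grid and isotropic boundary energy for brevity, whereas IMCGG uses a hexagonal grid and also supports misorientation- and inclination-dependent energies.

```python
import random

def grain_growth(L=32, n_orient=20, sweeps=30, seed=7):
    """Zero-temperature Potts-model grain growth on an L x L periodic
    square grid: a site adopts a random neighbor's orientation whenever
    that does not increase the local boundary energy."""
    rng = random.Random(seed)
    grid = [[rng.randrange(n_orient) for _ in range(L)] for _ in range(L)]
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def energy(i, j, q):
        # boundary energy = number of unlike neighbors
        return sum(q != grid[(i + di) % L][(j + dj) % L] for di, dj in nbrs)

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        di, dj = rng.choice(nbrs)
        q_new = grid[(i + di) % L][(j + dj) % L]
        if energy(i, j, q_new) <= energy(i, j, grid[i][j]):
            grid[i][j] = q_new               # accept energy-non-increasing move
    return grid
```

    Starting from a random orientation map, the total boundary length falls steadily as grains coarsen, the signature of normal grain growth.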

  10. Carlos Chagas: biographical sketch.

    PubMed

    Moncayo, Alvaro

    2010-01-01

    Carlos Chagas was born on 9 July 1878 on the farm "Bon Retiro", located close to the city of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late 19th century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years, the pathological concept of disease appeared in Germany, linking organic lesions with symptoms. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a research assistant at the Institute of Manguinhos, studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector, and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world

  11. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
    • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
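    The "global particle find" step reduces, in its simplest form, to mapping a particle coordinate to the rank that owns it. A schematic 1D version in Python (the slab decomposition and function names are illustrative only; a production code works per axis and then exchanges particles via MPI):

```python
def find_domain(x, bounds):
    """Map a particle coordinate to the rank owning that spatial slab.
    `bounds` holds the half-open [lo, hi) edges of a 1D decomposition."""
    for rank, (lo, hi) in enumerate(bounds):
        if lo <= x < hi:
            return rank
    raise ValueError("coordinate outside the global domain")

# four equal slabs covering [0, 1)
bounds = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
```

    Because the mapping depends only on the coordinate and the (replicated) domain edges, every processor can resolve a stray particle's owner without a global search, which is what makes the step scalable.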

  12. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  13. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures.

  14. Direct Simulation Monte Carlo: Recent Advances and Applications

    NASA Astrophysics Data System (ADS)

    Oran, E. S.; Oh, C. K.; Cybyk, B. Z.

    The principles of and procedures for implementing direct simulation Monte Carlo (DSMC) are described. Guidelines to inherent and external errors common in DSMC applications are provided. Three applications of DSMC to transitional and nonequilibrium flows are considered: rarefied atmospheric flows, growth of thin films, and microsystems. Selected new, potentially important advances in DSMC capabilities are described: Lagrangian DSMC, optimization on parallel computers, and hybrid algorithms for computations in mixed flow regimes. Finally, the limitations of current computer technology for using DSMC to compute low-speed, high-Knudsen-number flows are outlined as future challenges.

  15. Computer Series, 97.

    ERIC Educational Resources Information Center

    Kay, Jack G.; And Others

    1988-01-01

    Describes two applications of the microcomputer for laboratory exercises. Explores radioactive decay using the Bateman equations on a Macintosh computer. Provides examples and screen dumps of data. Investigates polymer configurations using a Monte Carlo simulation on an IBM personal computer. (MVL)

  16. Modification of codes NUALGAM and BREMRAD. Volume 3: Statistical considerations of the Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Firstenberg, H.

    1971-01-01

    The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
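    The statistical interpretation of any Monte Carlo code's results rests on the 1/sqrt(N) decay of the standard error of the mean, which is easy to verify empirically. A small Python check (names mine, unrelated to NUGAM2):

```python
import random

def mc_mean_error(n, trials=200, seed=8):
    """Empirical standard deviation of the Monte Carlo mean of n
    uniform samples, measured over repeated independent trials."""
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / (trials - 1)) ** 0.5
```

    Increasing the sample size by a factor of 100 shrinks the error by about a factor of 10, the square-root law that underlies the error bars quoted by Monte Carlo codes.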

  17. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute k{sub eff} and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain three concerns that must be addressed to perform calculations correctly: convergence of k{sub eff} and the fission distribution, bias in k{sub eff} and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
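    The cycle structure of a criticality calculation, inactive cycles for source convergence followed by active cycles for tallies, can be mimicked with a zero-dimensional toy. The sketch below is my own construction, not MCNP or KENO: in an infinite medium where every neutron is absorbed and fissions with probability p, the expected per-cycle multiplication is k = ν·p.

```python
import random

def keff_cycles(nu=2.5, p_fission=0.4, n_start=2000,
                inactive=20, active=80, seed=9):
    """Toy Monte Carlo power iteration: each neutron causes fission
    with probability p_fission, yielding nu neutrons on average.  The
    per-cycle k = produced/current fluctuates about nu * p_fission;
    inactive cycles are discarded before tallying."""
    rng = random.Random(seed)
    n, tallies = n_start, []
    for cycle in range(inactive + active):
        produced = 0
        for _ in range(n):
            if rng.random() < p_fission:
                produced += int(nu + rng.random())   # 2 or 3 neutrons
        k = produced / n
        if cycle >= inactive:                        # tally active cycles only
            tallies.append(k)
        # renormalize so the bank neither dies out nor explodes
        n = max(min(produced, 2 * n_start), n_start // 2)
    return sum(tallies) / len(tallies)
```

    With ν = 2.5 and p = 0.4 the toy system is exactly critical (k = 1), so the active-cycle average should scatter tightly around 1; in a real calculation the same averaging is only trustworthy after the fission source has converged, which is the first of the three concerns above.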

  18. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  19. Overview of the MCU Monte Carlo Software Package

    NASA Astrophysics Data System (ADS)

    Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

    2014-06-01

    MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.

  20. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  1. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters built from commodity nodes. However, using true supercomputers the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.

  2. Monte Carlo without chains

    SciTech Connect

    Chorin, Alexandre J.

    2007-12-12

    A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.

  3. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission matrix eigenvector, effectively eliminating all higher modes. For all computations here biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. To account for the coupling of all N spatial regions to every other region requires storing N{sup 2} values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate this data - here we have a way to get new information for multiplying systems. In Ref. [5], higher mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter

  4. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM{reg_sign} PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within {plus_minus}0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
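
The kind of Monte Carlo check used in this validation can be illustrated for the simplest configuration: a circular hot spot and a square sampling grid with a random offset, where the hot spot is detected if any grid node falls inside it. The sketch below is a generic illustration under that assumption (the function name and parameters are hypothetical, not from the ELIPGRID code); for a radius smaller than half the grid spacing the analytic answer is pi r^2 / G^2, giving a direct check:

```python
import random

def detection_probability(radius, spacing, n_trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a square sampling grid
    with a random offset places at least one node inside a circular hot
    spot of the given radius (radius assumed < spacing / 2)."""
    rng = random.Random(seed)
    # With radius < spacing / 2, only the four corners of the cell
    # containing the hot-spot centre can possibly hit it.
    nodes = [(gx, gy) for gx in (0.0, spacing) for gy in (0.0, spacing)]
    hits = 0
    for _ in range(n_trials):
        # Random hot-spot centre within one grid cell
        dx, dy = rng.random() * spacing, rng.random() * spacing
        if any((gx - dx) ** 2 + (gy - dy) ** 2 <= radius ** 2
               for gx, gy in nodes):
            hits += 1
    return hits / n_trials

# Analytic value for radius 0.3, spacing 1.0: pi * 0.09 ~ 0.2827
print(detection_probability(0.3, 1.0))
</imports>
</imports>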
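
The kind of Monte Carlo check used in this validation can be illustrated for the simplest configuration: a circular hot spot and a square sampling grid with a random offset, where the hot spot is detected if any grid node falls inside it. The sketch below is a generic illustration under that assumption (the function name and parameters are hypothetical, not from the ELIPGRID code); for a radius smaller than half the grid spacing the analytic answer is pi r^2 / G^2, giving a direct check:

```python
import random

def detection_probability(radius, spacing, n_trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a square sampling grid
    with a random offset places at least one node inside a circular hot
    spot of the given radius (radius assumed < spacing / 2)."""
    rng = random.Random(seed)
    # With radius < spacing / 2, only the four corners of the cell
    # containing the hot-spot centre can possibly hit it.
    nodes = [(gx, gy) for gx in (0.0, spacing) for gy in (0.0, spacing)]
    hits = 0
    for _ in range(n_trials):
        # Random hot-spot centre within one grid cell
        dx, dy = rng.random() * spacing, rng.random() * spacing
        if any((gx - dx) ** 2 + (gy - dy) ** 2 <= radius ** 2
               for gx, gy in nodes):
            hits += 1
    return hits / n_trials

# Analytic value for radius 0.3, spacing 1.0: pi * 0.09 ~ 0.2827
print(detection_probability(0.3, 1.0))
```

Elliptical hot spots and triangular grids, as handled by ELIPGRID, complicate the geometry test but not the overall structure of the simulation.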

  5. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  6. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency

  7. Grid Computing

    NASA Astrophysics Data System (ADS)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  8. TOPICAL REVIEW: Monte Carlo modelling of external radiotherapy photon beams

    NASA Astrophysics Data System (ADS)

    Verhaegen, Frank; Seuntjens, Jan

    2003-11-01

    An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources.

  9. Monte Carlo modelling of external radiotherapy photon beams.

    PubMed

    Verhaegen, Frank; Seuntjens, Jan

    2003-11-01

    An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources. PMID:14653555

  10. Monte Carlo treatment planning for photon and electron beams

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; van der Marck, S. C.; Schaart, D. R.; Van der Zee, W.; Van Vliet-Vroegindeweij, C.; Tomsej, M.; Jansen, J.; Heijmen, B.; Coghe, M.; De Wagter, C.

    2007-04-01

    During the last few decades, accuracy in photon and electron radiotherapy has increased substantially. This is partly due to enhanced linear accelerator technology, providing more flexibility in field definition (e.g. the usage of computer-controlled dynamic multileaf collimators), which led to intensity modulated radiotherapy (IMRT). Important improvements have also been made in the treatment planning process, more specifically in the dose calculations. Originally, dose calculations relied heavily on analytic, semi-analytic and empirical algorithms. The more accurate convolution/superposition codes use pre-calculated Monte Carlo dose "kernels" partly accounting for tissue density heterogeneities. It is generally recognized that the Monte Carlo method is able to increase accuracy even further. Since the second half of the 1990s, several Monte Carlo dose engines for radiotherapy treatment planning have been introduced. To enable the use of a Monte Carlo treatment planning (MCTP) dose engine in clinical circumstances, approximations have been introduced to limit the calculation time. In this paper, the literature on MCTP is reviewed, focussing on patient modeling, approximations in linear accelerator modeling and variance reduction techniques. An overview of published comparisons between MC dose engines and conventional dose calculations is provided for phantom studies and clinical examples, evaluating the added value of MCTP in the clinic. An overview of existing Monte Carlo dose engines and commercial MCTP systems is presented and some specific issues concerning the commissioning of a MCTP system are discussed.

  11. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at the University of Notre Dame (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations or were purely mathematical. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
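
The quarter-circle estimate this activity is built on is straightforward to reproduce in code. The sketch below (function name and sample count are illustrative) samples points uniformly in the unit square and uses the fraction landing inside the quarter circle, which is pi/4 by the ratio of areas:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # P(inside) = pi/4, so pi is approximately 4 * inside / n_samples
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))
```

As with the rice-sprinkling version, the statistical error shrinks only as 1/sqrt(N), which is itself a useful point to make with students.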

  12. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  13. Monte Carlo methods in lattice gauge theories

    SciTech Connect

    Otto, S.W.

    1983-01-01

The mass of the O/sup +/ glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of the plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the O/sup +/ glueball for this improved action is obtained and the mass divided by the square root of the string tension is a constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions into Monte Carlo calculations via the pseudo fermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and to SU(3) theory.

  14. Monte Carlo methods: Application to hydrogen gas and hard spheres

    NASA Astrophysics Data System (ADS)

    Dewing, Mark Douglas

    2001-08-01

Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied to a system of boson hard spheres to get exact, infinite system size results for the ground state at several densities. The kinds of problems that can be simulated with Monte Carlo methods are expanded through the development of new algorithms for combining a QMC simulation with a classical Monte Carlo simulation, which we call Coupled Electronic-Ionic Monte Carlo (CEIMC). The new CEIMC method is applied to a system of molecular hydrogen at temperatures ranging from 2800K to 4500K and densities from 0.25 to 0.46 g/cm^3. VMC requires optimizing a parameterized wave function to find the minimum energy. We examine several techniques for optimizing VMC wave functions, focusing on the ability to optimize parameters appearing in the Slater determinant. Classical Monte Carlo simulations use an empirical interatomic potential to compute equilibrium properties of various states of matter. The CEIMC method replaces the empirical potential with a QMC calculation of the electronic energy. This is similar in spirit to the Car-Parrinello technique, which uses Density Functional Theory for the electrons and molecular dynamics for the nuclei. The challenges in constructing an efficient CEIMC simulation center mostly around the noisy results generated from the QMC computations of the electronic energy. We introduce two complementary techniques, one for tolerating the noise and the other for reducing it. The penalty method modifies the Metropolis acceptance ratio to tolerate noise without introducing a bias in the simulation of the nuclei. For reducing the noise, we introduce the two-sided energy

  15. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green`s function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  16. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
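
The standard serial kMC model that such a parallel algorithm generalizes is the residence-time (n-fold way) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment whose mean is the inverse of the total rate. The sketch below is a generic illustration of that serial scheme under fixed rates, not code from the paper; the function name and rate list are illustrative:

```python
import math
import random

def kmc_run(rates, n_steps, seed=0):
    """Serial kinetic Monte Carlo (residence-time algorithm) with a
    fixed event-rate catalogue. Returns the elapsed simulated time and
    the sequence of selected event indices."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n_steps):
        total = sum(rates)
        # Select event i with probability rates[i] / total
        r = rng.random() * total
        acc = 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if r <= acc:
                events.append(i)
                break
        # Exponential time increment with mean 1 / total
        t += -math.log(rng.random()) / total
    return t, events

elapsed, events = kmc_run([1.0, 3.0], 1000)
```

In a real simulation the rate catalogue would be updated after each event; the parallel formulation of the paper synchronizes such steps exactly across domains.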

  17. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
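
Angular biasing of this kind rests on reweighting each photon by the ratio of the true angular density to the biased one, so the weighted distribution remains unbiased while more histories reach the capsule. The following is a generic sketch for an isotropic source and a single cone toward the capsule; all parameter names and values are illustrative, not taken from the paper:

```python
import random

def biased_emission(n, frac_toward=0.5, cone_cos=0.9, seed=0):
    """Emit photons from an isotropic source, but oversample the cone
    cos(theta) > cone_cos toward the capsule. Each photon carries a
    weight equal to (true density) / (biased density) in its region,
    so weighted tallies stay unbiased."""
    rng = random.Random(seed)
    # Fraction of the isotropic distribution inside the cone
    solid_frac = (1.0 - cone_cos) / 2.0
    photons = []
    for _ in range(n):
        if rng.random() < frac_toward:
            # Inside the cone: many photons, each with a small weight
            mu = cone_cos + rng.random() * (1.0 - cone_cos)
            weight = solid_frac / frac_toward
        else:
            # Outside the cone: fewer photons, each with a larger weight
            mu = -1.0 + rng.random() * (1.0 + cone_cos)
            weight = (1.0 - solid_frac) / (1.0 - frac_toward)
        photons.append((mu, weight))
    return photons

photons = biased_emission(10_000)
```

With these illustrative values, half the photons fly into a cone covering only 5% of the sphere, a tenfold boost in capsule statistics, while the weighted mean direction remains isotropic.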

  18. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    SciTech Connect

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-06-15

Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10{sup 7} x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  19. ARCHER{sub RT} – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    SciTech Connect

    Su, Lin; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond

    2014-07-15

Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER{sub RT} is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER{sub RT}. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER{sub RT} and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER{sub RT} agree well with DOSXYZnrc. For clinical cases, results from ARCHER{sub RT} are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to

  20. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be investigated meaningfully through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
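
    The kind of Monte Carlo study described here can be sketched in a few lines: draw many random samples from a population with known characteristics and examine the empirical distribution of a statistic. The population, sample size, and replication count below are illustrative choices, not from the paper:

```python
import random
import statistics

def mc_sampling_distribution(statistic, popdraw, n, reps, seed=1):
    """Empirical sampling distribution of `statistic` for samples of size n
    drawn from the population described by `popdraw` (a function rng -> float)."""
    rng = random.Random(seed)
    return [statistic([popdraw(rng) for _ in range(n)]) for _ in range(reps)]

# Example: for N(0, 1) and n = 25 the standard error of the mean is 1/5 = 0.2;
# the Monte Carlo estimate should land close to that known value.
dist = mc_sampling_distribution(statistics.mean, lambda r: r.gauss(0, 1), 25, 2000)
se = statistics.stdev(dist)
```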

  1. Efficient, Automated Monte Carlo Methods for Radiation Transport

    PubMed Central

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2012-01-01

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  2. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  3. Monte Carlo approach to nuclei and nuclear matter

    SciTech Connect

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-13

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  4. A multicomb variance reduction scheme for Monte Carlo semiconductor simulators

    SciTech Connect

    Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.

    1998-04-01

    The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
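
    The comb idea underlying the multicomb scheme can be illustrated for a single comb: evenly spaced "teeth" with one random offset resample a weighted particle population into equal-weight copies while preserving the total weight exactly. This is a generic sketch of the single-comb building block, not the authors' 2-D MOSFET implementation; the particle labels and weights are illustrative:

```python
import random

def comb(particles, weights, m, rng=random):
    """Resample m equal-weight particles from a weighted population with a
    single comb: teeth spaced W/m, offset by one uniform random number,
    sweep the cumulative-weight axis; total weight W is preserved exactly."""
    W = sum(weights)
    spacing = W / m
    tooth = rng.random() * spacing
    out, cum = [], 0.0
    for p, w in zip(particles, weights):
        cum += w
        while tooth < cum:          # every tooth landing in this particle's
            out.append(p)           # weight interval yields one copy
            tooth += spacing
    return out, spacing             # m particles, each carrying weight W/m
```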

  5. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, has been established by the FDS Team; it contains about 28.8 billion voxels. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were included in the merging, which produces fewer cells for the organs. At the same time, an index-based sorting algorithm was put forward to increase the merging speed. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry largely retains the original shape of the organs and can easily be written to the input files of different Monte Carlo codes, such as MCNP. Its universal applicability and high performance were experimentally verified.

  6. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  7. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  8. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  9. A Monte Carlo multimodal inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Maraschini, Margherita; Foti, Sebastiano

    2010-09-01

    The analysis of surface wave propagation is often used to estimate the S-wave velocity profile at a site. In this paper, we propose a stochastic approach for the inversion of surface waves, which allows apparent dispersion curves to be inverted. The inversion method is based on the integrated use of two misfit functions: a misfit function based on the determinant of the Haskell-Thomson matrix, and a classical Euclidean distance between the dispersion curves. The former allows all the modes of the dispersion curve to be taken into account with a very limited computational cost because it avoids the explicit calculation of the dispersion curve for each tentative model. It is used in a Monte Carlo inversion with a large population of profiles. In a subsequent step, the selection of representative models is obtained by applying a Fisher test based on the Euclidean distance between the experimental and the synthetic dispersion curves to the best models of the Monte Carlo inversion. This procedure allows the set of selected models to be identified on the basis of the data quality. It also mitigates the influence of local minima that can affect the Monte Carlo results. The effectiveness of the procedure is shown for synthetic and real experimental data sets, where the advantages of the two-stage procedure are highlighted. In particular, the determinant misfit allows the computation of large populations in stochastic algorithms with a limited computational cost.

  10. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb{sup -1} data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  11. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, which successfully reduce the exponential divergence of the decorrelation time τ with increase of the system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
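
    The Metropolis importance sampling that this paper extends has a compact standard form for the q-state Potts model. The following is a plain canonical (non-extended) sweep, with illustrative lattice size and temperature, not the authors' generalized algorithm:

```python
import math
import random

def potts_metropolis_sweep(spins, L, q, beta, rng):
    """One Metropolis sweep of the 2-D q-state Potts model on an L x L
    periodic lattice; the energy counts unequal nearest-neighbour pairs."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old, new = spins[i][j], rng.randrange(q)
        dE = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = spins[(i + di) % L][(j + dj) % L]
            dE += (nb != new) - (nb != old)   # +1 for each newly broken bond
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = new
```

At a coupling well above the ordering transition, an ordered start stays ordered, which is exactly the slow-decorrelation regime the cluster and extended methods are designed to escape.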

  12. Compressible generalized hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.

    2014-05-01

    One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
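
    A minimal hybrid Monte Carlo step in the standard (non-generalized) setting makes the ingredients above concrete: momentum refresh, leapfrog integration, then a Metropolis accept/reject on the total-energy error. The 1-D Gaussian target, step size, and trajectory length below are illustrative assumptions:

```python
import math
import random

def hmc_step(x, U, gradU, eps, n_leap, rng):
    """One hybrid Monte Carlo step for a 1-D target density exp(-U(x)):
    refresh the momentum, integrate Hamilton's equations with leapfrog,
    then accept or reject on the change in total energy."""
    p = rng.gauss(0.0, 1.0)
    H_old = U(x) + 0.5 * p * p
    xn, pn = x, p
    pn -= 0.5 * eps * gradU(xn)          # initial half kick
    for i in range(n_leap):
        xn += eps * pn                   # drift
        if i < n_leap - 1:
            pn -= eps * gradU(xn)        # full kick between drifts
    pn -= 0.5 * eps * gradU(xn)          # final half kick
    H_new = U(xn) + 0.5 * pn * pn
    if rng.random() < math.exp(min(0.0, H_old - H_new)):
        return xn                        # accept
    return x                             # reject: keep the old state
```

Because leapfrog only approximately conserves H, the acceptance rate drops below 100% exactly as the abstract describes.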

  13. Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations

    SciTech Connect

    Edward W. Larson

    2004-02-17

    The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.

  14. Monte Carlo applications for the design and operation of nuclear facilities

    SciTech Connect

    Carter, L.L.; Bunch, W.L.; Morford, R.J.; Wootan, D.W.; Schwarz, R.A.

    1988-06-01

    The computational capabilities of current supercomputers enable the application of rigorous Monte Carlo methods to solve day-to-day neutronics and shielding problems. Experience at Westinghouse Hanford Company has included applications to: reactor operations, decommissioning of a reactor facility, and the design of a space reactor; intermediate energy accelerators; and high-level waste facilities and casks. These practical applications are typically computationally intensive because of the amount of information required. A number of practical examples are discussed. An increase in effective computer capabilities would further enhance the use of Monte Carlo methods. 16 refs., 4 figs., 2 tabs.

  15. Coupling Photon Monte Carlo Simulation and CAD Software. Application to X-ray Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Tabary, J.; Glière, A.

    A Monte Carlo radiation transport simulation program, EGS Nova, and a Computer Aided Design software, BRL-CAD, have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current status, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a separate Monte Carlo code parameter set. Numerical validations show good agreement with EGS4 computed and published data. As the program's major drawback is the execution time, computational efficiency improvements are foreseen.

  16. Rocket plume radiation base heating by reverse Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Everson, John; Nelson, H. F.

    1993-10-01

    A reverse Monte Carlo radiative transfer code is developed to predict rocket plume base heating. It is more computationally efficient than the forward Monte Carlo method because only the radiation that strikes the receiving point is considered. The method easily handles both gas and particle emission and particle scattering. Band models are used for the molecular emission spectra, and the Henyey-Greenstein phase function is used for the scattering. Reverse Monte Carlo predictions are presented for (1) a gas-only model of the Space Shuttle main engine plume; (2) a pure-scattering plume with the radiation emitted by a hot disk at the nozzle exit; (3) a nonuniform-temperature, scattering, emitting and absorbing plume; and (4) a typical solid rocket motor plume. The reverse Monte Carlo method is shown to give good agreement with previous predictions. Typical solid rocket plume results show that (1) CO2 radiation is emitted from near the edge of the plume; (2) H2O gas and Al2O3 particles emit radiation mainly from the center of the plume; and (3) Al2O3 particles emit considerably more radiation than the gases over the 400-17,000 cm{sup -1} spectral interval.
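
    The Henyey-Greenstein phase function used here for particle scattering has a standard closed-form inversion for sampling the scattering-angle cosine; a minimal sketch (the asymmetry value in the comment is an illustrative choice):

```python
import random

def sample_hg_mu(g, rng=random):
    """Sample the scattering-angle cosine mu from the Henyey-Greenstein
    phase function with asymmetry parameter g; the mean of mu equals g."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                         # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

# e.g. g = 0.6 gives strongly forward-peaked scattering with E[mu] = 0.6
```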

  17. Monte Carlo Simulation Of Emission Tomography And Other Medical Imaging Techniques

    PubMed Central

    Harrison, Robert L.

    2010-01-01

    An introduction to Monte Carlo simulation of emission tomography. This paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations. PMID:20733931

  18. Theory and Applications of Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Deible, Michael John

    With the development of peta-scale computers and exa-scale computing only a few years away, the quantum Monte Carlo (QMC) method, with favorable scaling and inherent parallelizability, is poised to increase its impact on the electronic structure community. The most widely used variation of QMC is the diffusion Monte Carlo (DMC) method. The accuracy of the DMC method is only limited by the trial wave function that it employs. The effect of the trial wave function is studied here by initially developing correlation-consistent Gaussian basis sets for use in DMC calculations. These basis sets give a low variance in variational Monte Carlo calculations and improved convergence in DMC. The orbital type used in the trial wave function is then investigated, and it is shown that Brueckner orbitals result in a DMC energy comparable to a DMC energy with orbitals from density functional theory and significantly lower than orbitals from Hartree-Fock theory. Three large weakly interacting systems are then studied: a water-16 isomer, a methane clathrate, and a carbon dioxide clathrate. The DMC method is seen to be in good agreement with MP2 calculations and provides reliable benchmarks. Several strongly correlated systems are then studied. An H4 model system that allows for fine tuning of the multi-configurational character of the wave function shows when the accuracy of the DMC method with a single-Slater-determinant trial function begins to deviate from multi-reference benchmarks. The weakly interacting face-to-face ethylene dimer is studied with and without a rotation around the pi bond, which is used to increase the multi-configurational nature of the wave function. This test shows that the effect of a multi-configurational wave function in weakly interacting systems causes DMC with a single Slater determinant to be unable to achieve sub-chemical accuracy. The beryllium dimer is studied, and it is shown that a very large determinant expansion is required for DMC to predict a binding

  19. Kinetic Monte Carlo investigation of tetragonal strain on Onsager matrices

    NASA Astrophysics Data System (ADS)

    Li, Zebo; Trinkle, Dallas R.

    2016-05-01

    We use three different methods to compute the derivatives of Onsager matrices with respect to strain for vacancy-mediated multicomponent diffusion from kinetic Monte Carlo simulations. We consider a finite difference method, a correlated finite difference method to reduce the relative statistical errors, and a perturbation theory approach to compute the derivatives. We investigate the statistical error behavior of the three methods for uncorrelated single vacancy diffusion in fcc Ni and for correlated vacancy-mediated diffusion of Si in Ni. While perturbation theory performs best for uncorrelated systems, the correlated finite difference method performs best for the vacancy-mediated Si diffusion in Ni, where longer trajectories are required.
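
    The correlated finite difference idea, sharing one random-number stream between the strained and unstrained simulations so that most of the statistical noise cancels in the difference, can be sketched generically. The toy simulator below is an illustrative stand-in, not the Onsager-matrix estimator of the paper:

```python
import random

def fd_derivative(simulate, theta, h, n, correlated, seed=7):
    """Central finite-difference estimate of d/dtheta E[simulate(theta, rng)].
    With correlated=True, the +h and -h runs share one random stream
    (common random numbers), so most of the noise cancels in the difference."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed if correlated else seed + 1)
    plus = sum(simulate(theta + h, rng_a) for _ in range(n)) / n
    minus = sum(simulate(theta - h, rng_b) for _ in range(n)) / n
    return (plus - minus) / (2.0 * h)
```

For a toy model whose mean is theta itself, the correlated estimate of the derivative is essentially exact, while the uncorrelated one carries noise amplified by the 1/(2h) factor.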

  20. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and in-flight calibration data with MGEANT simulations.

  2. Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

    SciTech Connect

    Kiedrowski, Brian C.; Brown, Forrest B.

    2012-06-18

    An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and shown to reproduce the value, to within an additive constant, determined from a well-placed mesh by a user. The spherical estimators are also used to assess statistical coverage.
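
    The Shannon entropy that these estimator scores feed into is computed from the normalized source distribution; a minimal sketch, assuming the scores have already been tallied:

```python
import math

def shannon_entropy(scores):
    """Shannon entropy H = -sum_i p_i log2(p_i) of fission-source scores
    (mesh-cell or spherical-estimator tallies); zero scores contribute 0."""
    total = sum(scores)
    H = 0.0
    for s in scores:
        if s > 0:
            p = s / total
            H -= p * math.log2(p)
    return H

# A uniform source over 8 bins gives H = 3 bits; a fully concentrated
# source gives H = 0, so stagnation of H signals source convergence.
```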

  3. Monte Carlo scatter correction for SPECT

    NASA Astrophysics Data System (ADS)

    Liu, Zemei

    The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model, which produces less noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation-crystal-based SPECT and CZT-based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.

  4. Monte Carlo Simulation of Critical Casimir Forces

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg A.

    2015-03-01

    In the vicinity of a second order phase transition point, long-range critical fluctuations of the order parameter appear. The second order phase transition in a critical binary mixture in the vicinity of the demixing point belongs to the universality class of the Ising model. The superfluid transition in liquid He belongs to the universality class of the XY model. The confinement of long-range fluctuations causes critical Casimir forces acting on confining surfaces or particles immersed in the critical substance. Over the last decade, critical Casimir forces in binary mixtures and liquid helium have been studied experimentally. The critical Casimir force in a film of a given thickness scales as a universal scaling function of the ratio of the film thickness to the bulk correlation length, divided by the cube of the film thickness. Using Monte Carlo simulations we can compute critical Casimir forces and their scaling functions for lattice Ising and XY models, which correspond to experimental results for the binary mixture and liquid helium, respectively. This chapter provides a description of numerical methods for the computation of critical Casimir interactions for lattice models in plane-plane, plane-particle, and particle-particle geometries.

  5. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
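
    The deterministic power iteration that the Monte Carlo Power Method makes stochastic, together with a simple deflation step that exposes the second eigenvalue of a symmetric matrix, can be sketched as follows; the 2x2 example matrix is illustrative, not one of the thesis's transition matrices:

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-lists matrix."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, v, n_iter=200):
    """Dominant eigenpair of A by power iteration with max-norm scaling."""
    for _ in range(n_iter):
        w = matvec(A, v)
        s = max(abs(c) for c in w)
        v = [c / s for c in w]
    w = matvec(A, v)
    k = max(range(len(v)), key=lambda i: abs(v[i]))
    return w[k] / v[k], v            # eigenvalue estimate, eigenvector

def second_eigenvalue(A, v0):
    """Second eigenvalue of a symmetric A: find the dominant pair,
    deflate it out of A, then run power iteration again."""
    lam1, v1 = power_iteration(A, v0)
    n2 = sum(c * c for c in v1)
    B = [[A[i][j] - lam1 * v1[i] * v1[j] / n2 for j in range(len(A))]
         for i in range(len(A))]
    return power_iteration(B, v0)[0]
```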

  6. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach into the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  7. Direct aperture optimization for IMRT using Monte Carlo generated beamlets.

    PubMed

    Bergman, Alanah M; Bush, Karl; Milette, Marie-Pierre; Popescu, I Antoniu; Otto, Karl; Duzenli, Cheryl

    2006-10-01

    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 × 5.0 mm2 beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is approximately 33% compared to fluence-based optimization methods. PMID:17089832

  8. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  9. Monte Carlo applications at Hanford Engineering Development Laboratory

    SciTech Connect

    Carter, L.L.; Morford, R.J.; Wilcox, A.D.

    1980-03-01

    Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in the execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport, including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques.

  10. Monte Carlo sampling of fission multiplicity.

    SciTech Connect

    Hendricks, J. S.

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly {sup 3}He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k{sub eff} of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
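
    The two sampling strategies contrasted above can be sketched in a few lines of Python. This is an illustrative toy, not the actual implementation: the function names, the fixed Gaussian width, and the particular zero-point value are my assumptions.

```python
import random

def sample_nu_traditional(nu_bar, rng):
    """Traditional method: split nu_bar between its two nearest integers,
    e.g. nu_bar = 2.7 gives 3 with probability 0.7 and 2 otherwise."""
    base = int(nu_bar)
    return base + 1 if rng.random() < nu_bar - base else base

def sample_nu_gaussian(nu_bar, width, zero_point, rng):
    """Gaussian sampling with a corrected zero point: any sample falling
    below the corrected zero point is killed (scores 0 neutrons), which
    compensates for the bias from rejecting the negative tail."""
    nu = rng.gauss(nu_bar, width)
    return round(nu) if nu >= zero_point else 0

rng = random.Random(1)
avg = sum(sample_nu_traditional(2.7, rng) for _ in range(100_000)) / 100_000
# avg reproduces the target average of 2.7 neutrons per fission
```

    In the paper the corrected zero point is recomputed efficiently at each fission from the error function; the constant used here only illustrates the mechanism.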

  11. A standard timing benchmark for EGS4 Monte Carlo calculations.

    PubMed

    Bielajew, A F; Rogers, D W

    1992-01-01

    A Fortran 77 Monte Carlo source code built from the EGS4 Monte Carlo code system has been used for timing benchmark purposes on 29 different computers. This code simulates the deposition of energy from an incident electron beam in a 3-D rectilinear geometry such as one would employ to model electron and photon transport through a series of CT slices. The benchmark forms a standalone system and does not require that the EGS4 system be installed. The Fortran source code may be ported to different architectures by modifying a few lines and only a moderate amount of CPU time is required ranging from about 5 h on PC/386/387 to a few seconds on a massively parallel supercomputer (a BBN TC2000 with 512 processors). PMID:1584121

  12. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
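
    The core idea, sampling parameter values rather than iterating over an exhaustive grid, can be illustrated with a short Python sketch. The function names and the uniform random design are my assumptions for illustration, not the authors' code.

```python
import itertools
import random

def grid_design(n_params, levels=10):
    """Exhaustive design: levels**n_params runs, infeasible as n_params grows."""
    axis = [i / (levels - 1) for i in range(levels)]
    return list(itertools.product(axis, repeat=n_params))

def sampled_design(n_params, n_runs, seed=0):
    """Monte Carlo design: cost fixed at n_runs regardless of dimension."""
    rng = random.Random(seed)
    return [tuple(rng.random() for _ in range(n_params))
            for _ in range(n_runs)]

# A 3-parameter grid at 10 levels already needs 1000 runs; 12 parameters
# would need 10**12, while a sampled design stays at whatever budget we set.
```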

  13. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.

  14. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists in estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large-scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods, called sequential Monte Carlo or interacting particles, can take benefit of the structure and provide local EM property estimates.

  15. Minimising biases in full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two-determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step. PMID:25770522

  16. Estimation of beryllium ground state energy by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-01

    Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to better results than the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
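
    The variational Monte Carlo procedure applied to beryllium above can be illustrated on a much simpler system, the 1D harmonic oscillator with trial wave function exp(-αx²). This is a toy sketch of the method only; the beryllium wave function and energies in the paper are far more involved.

```python
import math
import random

def local_energy(x, alpha):
    # E_L = -(1/2) psi''/psi + (1/2) x^2 for psi = exp(-alpha x^2)
    # (hbar = m = omega = 1): E_L = alpha + x^2 * (1/2 - 2 alpha^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=20_000, step=1.0, seed=42):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # accept with probability min(1, |psi(x_new)/psi(x)|^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps
```

    At the optimal α = 1/2 the trial function is exact, the local energy is constant, and the estimator has zero variance; for any other α the variational principle puts the estimate above the exact ground state energy of 0.5.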

  17. Research on GPU Acceleration for Monte Carlo Criticality Calculation

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Yu, Ganglin; Wang, Kan

    2014-06-01

    The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures due to the independence between particles during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase the GPU thread occupancy. In order to test the sensitivity of the acceleration to the MC code's complexity, a 1D one-group code and a 3D multi-group general purpose code are respectively ported to GPUs, and the acceleration effects are compared. The results of the numerical experiments show a considerable acceleration effect for the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
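
    For a sense of what a simple one-group MC criticality kernel computes, here is a minimal zero-dimensional (infinite-medium) analogue in plain Python. This is a CPU toy for illustration only; the GPU kernels, cross sections, and transport-step strategy of the paper are not reproduced, and the cross-section values below are invented.

```python
import random

def k_inf_analog(nu=2.5, sigma_f=0.05, sigma_c=0.05,
                 n_histories=100_000, seed=5):
    """Infinite-medium one-group eigenvalue: each absorbed neutron is a
    fission with probability sigma_f / (sigma_f + sigma_c), producing nu
    neutrons on average, so k_inf = nu * sigma_f / (sigma_f + sigma_c)."""
    rng = random.Random(seed)
    sigma_a = sigma_f + sigma_c
    produced = 0.0
    for _ in range(n_histories):
        if rng.random() < sigma_f / sigma_a:   # this absorption is a fission
            produced += nu
    return produced / n_histories

# With these cross sections the analytic answer is 2.5 * 0.5 = 1.25.
```

    Because every history is independent, the loop body maps directly onto one GPU thread per history, which is exactly the parallelism the abstract exploits.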

  18. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to better results than the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.

  19. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  20. Excited states of methylene from quantum Monte Carlo.

    PubMed

    Zimmerman, Paul M; Toulouse, Julien; Zhang, Zhiyong; Musgrave, Charles B; Umrigar, C J

    2009-09-28

    The ground and lowest three adiabatic excited states of methylene are computed with the variational Monte Carlo and diffusion Monte Carlo (DMC) methods using progressively larger Jastrow-Slater multideterminant complete active space (CAS) wave functions. The highest of these states has the same symmetry, (1)A(1), as the first excited state. The DMC excitation energies obtained using any of the CAS wave functions are in excellent agreement with experiment, but single-determinant wave functions do not yield accurate DMC energies of the states of (1)A(1) symmetry, indicating that it is important to include in the wave function Slater determinants that describe static (strong) correlation. Excitation energies obtained using recently proposed pseudopotentials [Burkatzki et al., J. Chem. Phys. 126, 234105 (2007)] differ from the all-electron excitation energies by at most 0.04 eV. PMID:19791848

  1. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
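
    A minimal SAMC-style flat-histogram sketch in Python, estimating log g(E) for E = number of up spins among independent spins, where the exact answer is the binomial coefficient C(n, E). The gain sequence, move set, and test system are my illustrative assumptions; the paper's multidimensional and polymer applications are much richer.

```python
import math
import random

def samc_density_of_states(n_spins=10, n_steps=200_000, t0=1000.0, seed=7):
    """SAMC-style flat-histogram estimate of log g(E), where E counts the
    up spins among n_spins independent spins (exact: log C(n_spins, E))."""
    rng = random.Random(seed)
    log_g = [0.0] * (n_spins + 1)
    state = [0] * n_spins
    e = 0
    for t in range(1, n_steps + 1):
        i = rng.randrange(n_spins)
        e_new = e + (1 if state[i] == 0 else -1)
        # accept with probability min(1, g(E)/g(E_new)): flat histogram in E
        if math.log(rng.random()) < log_g[e] - log_g[e_new]:
            state[i] ^= 1
            e = e_new
        log_g[e] += t0 / max(t0, t)       # SAMC gain sequence gamma_t
    base = log_g[0]
    return [lg - base for lg in log_g]    # normalize so log g(0) = 0

est = samc_density_of_states()
exact = [math.log(math.comb(10, k)) for k in range(11)]
```

    The decreasing gain sequence is what distinguishes SAMC from plain Wang-Landau sampling and gives it its convergence guarantees.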

  2. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
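
    The tally modification under discussion is tiny in code terms. A hedged Python sketch of the grazing-band score follows; the parameter names are mine, with substitute = 1/2 corresponding to the standard practice of dividing by half the cutoff and substitute = 2/3 to the alternative the abstract suggests for some situations.

```python
def surface_flux_score(mu, cutoff=0.1, substitute=0.5):
    """Weight multiplier for one surface crossing with direction cosine mu
    (relative to the surface normal). Outside the grazing band the usual
    1/|mu| estimator is used; inside it, |mu| is replaced by
    substitute * cutoff to avoid unbounded scores at grazing angles."""
    amu = abs(mu)
    if amu > cutoff:
        return 1.0 / amu
    return 1.0 / (substitute * cutoff)
```

    For example, a crossing at mu = 0.01 scores 1/(0.5 * 0.1) = 20 under the standard choice and 15 under the two-thirds choice, instead of the raw 1/0.01 = 100.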

  3. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  4. Modelling photon transport in non-uniform media for SPECT with a vectorized Monte Carlo code.

    PubMed

    Smith, M F

    1993-10-01

    A vectorized Monte Carlo code has been developed for modelling photon transport in non-uniform media for single-photon-emission computed tomography (SPECT). The code is designed to compute photon detection kernels, which are used to build system matrices for simulating SPECT projection data acquisition and for use in matrix-based image reconstruction. Non-uniform attenuating and scattering regions are constructed from simple three-dimensional geometric shapes, in which the density and mass attenuation coefficients are individually specified. On a Stellar GS1000 computer, Monte Carlo simulations are performed between 1.6 and 2.0 times faster when the vector processor is utilized than when computations are performed in scalar mode. Projection data acquired with a clinical SPECT gamma camera for a line source in a non-uniform thorax phantom are well modelled by Monte Carlo simulations. The vectorized Monte Carlo code was used to simulate a 99Tcm SPECT myocardial perfusion study, and compensations for non-uniform attenuation and the detection of scattered photons improve activity estimation. The speed increase due to vectorization makes Monte Carlo simulation more attractive as a tool for modelling photon transport in non-uniform media for SPECT. PMID:8248288

  5. Alternative Computational Approaches for Probabilistic Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Moore, N. R.; Grigoriu, M.

    1995-01-01

    The feasibility of alternative methods to direct Monte Carlo simulation for failure probability computations is discussed. First- and second-order reliability methods are used for fatigue crack growth and low cycle fatigue structural failure modes to illustrate typical problems.

  6. APR1400 LBLOCA uncertainty quantification by Monte Carlo method and comparison with Wilks' formula

    SciTech Connect

    Hwang, M.; Bae, S.; Chung, B. D.

    2012-07-01

    An analysis of the uncertainty quantification for the PWR LBLOCA by Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LBLOCA accident were determined by the PIRT results from the BEMUSE project. The Monte Carlo method shows that the 95th percentile PCT value can be obtained reliably with a 95% confidence level using Wilks' formula. The extra margin by Wilks' formula over the true 95th percentile PCT by the Monte Carlo method was rather large. Even using the 3rd-order formula, the value calculated using Wilks' formula is nearly 100 K over the true value. It is shown that, with the ever increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame. (authors)
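
    Wilks' formula itself is straightforward to evaluate. The sketch below finds the minimum number of code runs for a one-sided tolerance limit by treating the number of runs exceeding the target percentile as binomial; the standard BEPU values are 59 runs at first order, 93 at second, and 124 at third for a 95%/95% criterion. The function name is mine.

```python
from math import comb

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest N such that the order-th largest of N outputs bounds the
    `coverage` percentile with probability >= `confidence` (one-sided)."""
    n = order
    while True:
        n += 1
        # P(at least `order` of n runs exceed the coverage percentile)
        conf = 1.0 - sum(comb(n, k) * coverage ** (n - k)
                         * (1 - coverage) ** k
                         for k in range(order))
        if conf >= confidence:
            return n
```

    Higher orders cost more runs but shave off the large extra margin over the true percentile that the abstract observes.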

  7. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium. PMID:25607163

  8. Composite sequential Monte Carlo test for post-market vaccine safety surveillance.

    PubMed

    Silva, Ivair R

    2016-04-30

    Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for true management of the alpha spending at each of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for a sequential Monte Carlo test in sequential analysis, and this is the reason it is called a 'composite sequential' test. An upper bound for the potential power losses from the proposed method is deduced. The composite sequential design is illustrated through an application to post-market vaccine safety surveillance data. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26561330

  9. Never trust straightforward intuition when choosing the number of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Nowak, Wolfgang; de Barros, Felipe; Rajagopal, Ram

    2013-04-01

    Uncertainty quantification for predicting flow and transport in heterogeneous aquifers often entails Monte Carlo simulations executed on top of random field generation. Typically, the number of Monte Carlo simulations ranges between 500 and 1000, sometimes even higher. In many cases, this choice is based on the restricted available computational time, or on convergence analysis of the Monte Carlo simulations. The spatial resolution is most frequently fixed to experience values from the literature, independent of the number of Monte Carlo realizations. Sometimes, a compromise is found between spatial resolution, Monte Carlo resolution and available computational time. We question this practice, because it does not look at the trade-off between the individual resolutions, individual errors, total errors and computational time. Our goal is to show that what modelers really want is neither poor statistics of good physics, nor good statistics of poor physics. Instead, one should look for an overall optimum choice in both decisions. To this end, we assess an optimum for the number of Monte Carlo simulations together with the spatial resolution of the computational models. Our analysis is based on the idea to jointly consider the discretization errors and computational costs of all individual model dimensions (physical space, time, parameter space). This yields a cost-to-error surface which serves to aid modelers in finding an optimal allocation of the computational resources. The optimal allocation yields the highest accuracy associated with a given prediction goal for a given computational budget. We illustrate our approach with two examples from subsurface hydrogeology. The examples are taken from wetland management and from a remediation design problem. When comparing the two different optimum allocation patterns among each other and to typical values found in the literature, we make counterintuitive observations. For example, a realistic number of Monte Carlo realizations should be
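
    The trade-off discussed above rests on the 1/sqrt(N) convergence of the Monte Carlo statistical error: doubling accuracy costs four times the realizations, which must be weighed against spatial discretization error. A quick empirical check in Python (a generic illustration of the convergence rate, not the authors' cost-to-error analysis):

```python
import math
import random

def mc_mean_error(n, trials=200, seed=0):
    """Empirical RMS error of an n-sample Monte Carlo mean of Uniform(0,1);
    theory predicts sigma / sqrt(n) with sigma = 1 / sqrt(12)."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(trials):
        est = sum(rng.random() for _ in range(n)) / n
        sq += (est - 0.5) ** 2
    return math.sqrt(sq / trials)

# Quadrupling the number of realizations roughly halves the error:
e100, e400 = mc_mean_error(100), mc_mean_error(400)
```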

  10. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ɛ in the numerical solution to collisional plasma problems from O(ɛ⁻³), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ɛ⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
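
    The MLMC telescoping-sum idea can be sketched on a scalar SDE. The toy below estimates E[X_T] for geometric Brownian motion with coupled Euler-Maruyama levels; it illustrates only the multilevel structure, not the Landau-Fokker-Planck physics or the Milstein discretization of the paper, and all parameter values are my assumptions.

```python
import math
import random

def mlmc_gbm_mean(x0=1.0, mu=0.05, sigma=0.2, T=1.0, L=4,
                  n_samples=(20_000, 5_000, 2_000, 1_000, 500), seed=3):
    """Multilevel Monte Carlo estimate of E[X_T] for geometric Brownian
    motion via Euler-Maruyama.  Level l uses 2**l time steps; the fine and
    coarse paths of each correction term share the same Brownian increments,
    which is what makes the level variances decay."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(L + 1):
        n_fine = 2 ** level
        h = T / n_fine
        acc = 0.0
        for _ in range(n_samples[level]):
            xf = xc = x0
            dw_coarse = 0.0
            for step in range(n_fine):
                dw = rng.gauss(0.0, math.sqrt(h))
                xf += mu * xf * h + sigma * xf * dw
                dw_coarse += dw
                if step % 2 == 1:          # two fine steps = one coarse step
                    xc += mu * xc * 2.0 * h + sigma * xc * dw_coarse
                    dw_coarse = 0.0
            acc += xf if level == 0 else xf - xc
        estimate += acc / n_samples[level]
    return estimate

# The exact answer is x0 * exp(mu * T), about 1.0513 here.
```

    Most samples are spent on the cheap coarse level while the expensive fine levels need only a few, which is the source of the cost reduction from O(ɛ⁻³) to O(ɛ⁻²).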

  11. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
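
    As a baseline for the reversible chains discussed above, here is the textbook detailed-balance example: random-walk Metropolis targeting a standard normal. This is a generic sketch for orientation, not the SOL-HMC algorithm of the paper.

```python
import math
import random

def rw_metropolis(log_pi, x0=0.0, n=50_000, step=1.0, seed=11):
    """Random-walk Metropolis: reversible by construction, since the
    symmetric proposal plus accept/reject step satisfies detailed balance
    with respect to pi (given here through log_pi)."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)     # symmetric proposal
        # accept with probability min(1, pi(y)/pi(x))
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
        chain.append(x)
    return chain

# Target: standard normal, log pi(x) = -x^2 / 2 up to an additive constant.
chain = rw_metropolis(lambda x: -0.5 * x * x)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

    Irreversible variants break the detailed-balance symmetry of this accept/reject step, which is precisely what the paper argues can speed convergence and reduce asymptotic variance.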

  12. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  13. Monte Carlo simulation framework for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Angeli, George Z.

    2008-07-01

    This presentation describes a strategy for assessing the performance of the Thirty Meter Telescope (TMT). A Monte Carlo Simulation Framework has been developed to combine optical modeling with Computational Fluid Dynamics simulations (CFD), Finite Element Analysis (FEA) and controls to model the overall performance of TMT. The framework consists of a two-year record of observed environmental parameters such as atmospheric seeing, site wind speed and direction, ambient temperature and local sunset and sunrise times, along with telescope azimuth and elevation with a given sampling rate. The modeled optical, dynamic and thermal seeing aberrations are available in a matrix form for distinct values within the range of influencing parameters. These parameters are either part of the framework parameter set or can be derived from them at each time-step. As time advances, the aberrations are interpolated and combined based on the current value of their parameters. Different scenarios can be generated based on operating parameters such as venting strategy, optical calibration frequency and heat source control. Performance probability distributions are obtained and provide design guidance. The sensitivity of the system to design, operating and environmental parameters can be assessed in order to maximize the percentage of time the system meets the performance specifications.

  14. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw, to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary Title of the program: DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required: <25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur

  15. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  16. Performance of quantum Monte Carlo for calculating molecular bond lengths

    NASA Astrophysics Data System (ADS)

    Cleland, Deidre M.; Per, Manolo C.

    2016-03-01

    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.

  17. Chemical accuracy from quantum Monte Carlo for the benzene dimer

    SciTech Connect

    Azadi, Sam; Cohen, R. E.

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  18. A Wigner Monte Carlo approach to density functional theory

    NASA Astrophysics Data System (ADS)

    Sellier, J. M.; Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn-Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom, and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show good agreement with the standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn-Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  19. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
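
    The k-nearest-neighbor stage of such a tool can be sketched in a few lines. The data, failure rule, and parameter values below are hypothetical, invented for illustration; the actual tool additionally uses kernel density estimation and sequential feature selection, which are omitted here.

```python
import math
import random

def knn_predict(train, query, k=5):
    """Classify a query run by majority vote among its k nearest
    labelled runs in normalized design-parameter space."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return 1 if sum(label for _, label in nearest) > k // 2 else 0

# Hypothetical Monte Carlo dispersions: each run draws two normalized
# design parameters and (for illustration) fails, label 1, only when
# both are simultaneously high -- the kind of parameter *combination*
# such a tool is meant to surface automatically.
rng = random.Random(42)
runs = []
for _ in range(400):
    a, b = rng.random(), rng.random()
    runs.append(((a, b), 1 if a > 0.7 and b > 0.7 else 0))

# hold out a quarter of the runs and check the classifier recovers
# the failure region from the labelled examples alone
train, held_out = runs[:300], runs[300:]
accuracy = sum(knn_predict(train, p) == label
               for p, label in held_out) / len(held_out)
```

Ranking parameters then amounts to repeating such a fit with individual parameters (or small subsets) and scoring how well each predicts failure.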

  20. On Monte Carlo Methods and Applications in Geoscience

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Blais, J.

    2009-05-01

    Monte Carlo methods are designed to study various deterministic problems using probabilistic approaches, with computer simulations to explore much wider possibilities for the different algorithms. Pseudo-Random Number Generators (PRNGs) are based on linear congruences of some large prime numbers, while Quasi-Random Number Generators (QRNGs) provide low-discrepancy sequences, both of which give uniformly distributed numbers in (0,1). Chaotic Random Number Generators (CRNGs) give sequences of 'random numbers' satisfying some prescribed probabilistic density, often denser around the two corners of the interval (0,1), but transforming this type of density to a uniform one is usually possible. Markov Chain Monte Carlo (MCMC), as indicated by its name, is associated with Markov Chain simulations. Basic descriptions of these random number generators will be given, and a comparative analysis of these four methods will be included based on their efficiencies and other characteristics. Some applications in geoscience using Monte Carlo simulations will be described, and a comparison of these algorithms will also be included with some concluding remarks.
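
    The contrast between PRNG and QRNG sampling can be illustrated with a small quadrature experiment. The radical-inverse (van der Corput/Halton) construction and the test integrand below are standard textbook choices, not taken from this abstract.

```python
import random

def van_der_corput(i, base):
    """Radical inverse of integer i in the given base; pairing bases 2
    and 3 gives a 2-D low-discrepancy (Halton) sequence in (0,1)^2."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def mc_integral(points, f):
    """Plain Monte Carlo quadrature: average f over the point set."""
    return sum(f(x, y) for x, y in points) / len(points)

# Integrate f(x, y) = x² + y² over the unit square (exact value 2/3)
# with pseudo-random and quasi-random point sets of the same size.
n = 4096
f = lambda x, y: x * x + y * y
rng = random.Random(0)
pseudo = [(rng.random(), rng.random()) for _ in range(n)]
quasi = [(van_der_corput(i, 2), van_der_corput(i, 3))
         for i in range(1, n + 1)]
err_pseudo = abs(mc_integral(pseudo, f) - 2.0 / 3.0)
err_quasi = abs(mc_integral(quasi, f) - 2.0 / 3.0)
```

For smooth integrands the quasi-random error decays roughly like (log n)²/n, versus the n^(-1/2) statistical error of the pseudo-random estimate.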

  1. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    NASA Astrophysics Data System (ADS)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles that have a dielectric constant different from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  2. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods. PMID:26374029

  3. Kinetic Monte Carlo with fields: diffusion in heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Caro, Jose Alfredo

    2011-03-01

    It is commonly perceived that to achieve breakthrough scientific discoveries in the 21st century, an integration of world-leading experimental capabilities with theory, computational modeling and high performance computer simulations is necessary. Lying between the atomic and the macro scales, the meso scale is crucial for advancing materials research. Deterministic methods are computationally too heavy to cover the length and time scales relevant for this scale. Therefore, stochastic approaches are among the options of choice. In this talk I will describe recent progress in efficient parallelization schemes for Metropolis and kinetic Monte Carlo [1-2], and the combination of these ideas into a new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm developed to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields, incorporating in this way the actual driving force emerging from chemical potential gradients. Applications are shown on precipitation and segregation in nanostructured materials. Work in collaboration with E. Martinez, LANL, and with B. Sadigh, P. Erhart and A. Stukowsky, LLNL. Supported by the Center for Materials at Irradiation and Mechanical Extremes, an Energy Frontier Research Center funded by the U.S. Department of Energy (Award # 2008LANL1026) at Los Alamos National Laboratory
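
    The serial building block that such parallel kinetic Monte Carlo schemes distribute is the rejection-free residence-time (BKL) step; a minimal sketch follows, with event rates and step counts that are illustrative, not from the talk.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (BKL / residence-time) kinetic Monte Carlo
    step: choose event i with probability rate_i / R_total, then
    advance the clock by an exponential increment -ln(u) / R_total."""
    total = sum(rates)
    u = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):    # linear search over partial sums
        acc += rate
        if u < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # 1 - u avoids log(0)
    return i, dt

# Toy two-event system: rates 1.0 and 3.0, so event 1 should fire about
# 75% of the time and the mean time increment should be 1/4.
rng = random.Random(7)
events, t = [], 0.0
for _ in range(20000):
    i, dt = kmc_step([1.0, 3.0], rng)
    events.append(i)
    t += dt
```

In a diffusion simulation the rate list would hold the hop rates of every mobile atom, recomputed locally after each executed event.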

  4. Monte Carlo simulation studies of backscatter factors in mammography

    SciTech Connect

    Chan, H.P.; Doi, K.

    1981-04-01

    Experimentally determined backscatter factors in mammography can contain significant systematic errors due to the energy response, dimensions, and location of the dosimeter used. In this study, the Monte Carlo method was applied to simulate photon scattering in tissue-equivalent media and to determine backscatter factors without the interference of a detector. The physical processes of measuring backscatter factors with a lithium fluoride thermoluminescent dosimeter (TLD) and an ideal tissue-equivalent detector were also simulated. Computed results were compared with the true backscatter factors and with measured values reported by other investigators. It was found that the TLD method underestimated backscatter factors in mammography by as much as 10% at high energies.

  5. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  6. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them effectively and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, these worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are scarce. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.

  7. Constrained Path Quantum Monte Carlo Method for Fermion Ground States

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwei; Carlson, J.; Gubernatis, J. E.

    1995-05-01

    We propose a new quantum Monte Carlo algorithm to compute fermion ground-state properties. The ground state is projected from an initial wave function by a branching random walk in an over-complete basis space of Slater determinants. By constraining the determinants according to a trial wave function |ΨT>, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ΨT> is exact. We report results on the two-dimensional Hubbard model up to size 16×16, for various electron fillings and interaction strengths.

  8. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  9. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  10. A Monte Carlo method for 3D thermal infrared radiative transfer

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liou, K. N.

    2006-09-01

    A 3D Monte Carlo model for specific application to broadband thermal radiative transfer has been developed in which the emissivities for gases and cloud particles are parameterized by using a single cubic element as the building block in 3D space. For spectral integration in the thermal infrared, the correlated k-distribution method has been used for the sorting of gaseous absorption lines in multiple-scattering atmospheres involving 3D clouds. To check the Monte Carlo simulation, we compare a variety of 1D broadband atmospheric fluxes and heating rates to those computed from the conventional plane-parallel (PP) model and demonstrate excellent agreement between the two. Comparisons of the Monte Carlo results for broadband thermal cooling rates in 3D clouds to those computed from the delta-diffusion approximation for 3D radiative transfer and the independent pixel-by-pixel approximation are subsequently carried out to understand the relative merits of these approaches.

  11. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.

    PubMed

    Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L

    2003-02-01

    Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310

  12. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  13. Novel Hybrid Monte Carlo/Deterministic Technique for Shutdown Dose Rate Analyses of Fusion Energy Systems

    SciTech Connect

    Ibrahim, Ahmad M; Peplow, Douglas E.; Peterson, Joshua L; Grove, Robert E

    2013-01-01

    The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in the accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the increase over analog Monte Carlo is higher than 10,000.

  14. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    SciTech Connect

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  15. A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems

    NASA Astrophysics Data System (ADS)

    Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.

    2001-06-01

    We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.

  16. Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning

    PubMed Central

    Jabbari, Keyvan

    2011-01-01

    An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and successive generations of treatment planning systems have been developed according to these factors. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue, using the physics of the interaction of the particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; Monte Carlo techniques are therefore accurate, but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use; the acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661

  17. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  18. Monte Carlo direct view factor and generalized radiative heat transfer programs

    NASA Technical Reports Server (NTRS)

    Mc Williams, J. L.; Scates, J. H.

    1969-01-01

    Computer programs find the direct view factor from one surface segment to another using the Monte Carlo technique, and the radiative-transfer coefficients between surface segments. An advantage of the programs is the great generality of problems treatable and the rapidity of solution from problem conception to receipt of results.
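    The direct view factor in the record above is the fraction of diffusely emitted radiation leaving one surface that arrives directly at another, which Monte Carlo estimates by counting rays. The sketch below is illustrative only, assuming two coaxial parallel unit squares; the function name and parameters are invented for the example and do not come from the NASA programs.

```python
import math
import random

def view_factor_mc(n_rays=200_000, gap=1.0, size=1.0, seed=1):
    """Estimate the direct view factor F12 between two coaxial parallel
    squares separated by `gap`, by counting diffusely emitted rays from
    surface 1 (z = 0) that strike surface 2 (z = gap)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # random emission point on surface 1
        x0 = rng.uniform(-size / 2, size / 2)
        y0 = rng.uniform(-size / 2, size / 2)
        # cosine-weighted direction for a diffuse (Lambertian) emitter:
        # sin^2(theta) is uniform on [0, 1]
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(rng.random())
        cos_t = math.sqrt(1.0 - sin_t * sin_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        # intersect the ray with the plane of surface 2
        t = gap / dz
        x1, y1 = x0 + t * dx, y0 + t * dy
        if abs(x1) <= size / 2 and abs(y1) <= size / 2:
            hits += 1
    return hits / n_rays
```

    For unit squares separated by a unit gap, the tabulated analytic view factor is roughly 0.2, which the estimate approaches as the ray count grows.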

  19. First-Order or Second-Order Kinetics? A Monte Carlo Answer

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
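    The kind of computational experiment this record describes can be reproduced in miniature: generate noisy concentration-time data of known order, fit both linearized rate laws by least squares, and pick the order whose fit leaves the smaller residual in concentration space. This is a hedged sketch, not Tellinghuisen's actual procedure; the function names, rate constant, and noise model are all assumptions.

```python
import math
import random

def simulate(order, k=0.05, a0=1.0, n=25, dt=2.0, noise=0.01, seed=7):
    """Synthetic concentration-vs-time data for a reaction of known order,
    with additive Gaussian measurement noise."""
    rng = random.Random(seed)
    ts = [i * dt for i in range(n)]
    if order == 1:
        a = [a0 * math.exp(-k * t) for t in ts]      # A = A0 exp(-k t)
    else:
        a = [1.0 / (1.0 / a0 + k * t) for t in ts]   # 1/A = 1/A0 + k t
    return ts, [c + rng.gauss(0.0, noise) for c in a]

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def classify(ts, a):
    """Fit both linearized rate laws and return the order whose fit has
    the smaller squared residual in concentration space."""
    b1, c1 = linfit(ts, [math.log(c) for c in a])  # 1st order: ln A linear in t
    b2, c2 = linfit(ts, [1.0 / c for c in a])      # 2nd order: 1/A linear in t
    sse1 = sum((c - math.exp(c1 + b1 * t)) ** 2 for t, c in zip(ts, a))
    sse2 = sum((c - 1.0 / (c2 + b2 * t)) ** 2 for t, c in zip(ts, a))
    return 1 if sse1 < sse2 else 2
```

    Repeating the generate-and-classify cycle over many noise realizations gives the Monte Carlo discrimination rate the abstract refers to.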

  20. Monte Carlo molecular simulation predictions for the heat of vaporization of acetone and butyramide.

    SciTech Connect

    Biddy, Mary J.; Martin, Marcus Gary

    2005-03-01

    Vapor pressure and heats of vaporization are computed for the industrial fluid properties simulation challenge (IFPSC) data set using the Towhee Monte Carlo molecular simulation program. Results are presented for the CHARMM27 and OPLS-aa force fields. Once again, the average result using multiple force fields is a better predictor of the experimental value than either individual force field.

  1. Overview and applications of the Monte Carlo radiation transport kit at LLNL

    SciTech Connect

    Sale, K E

    1999-06-23

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.

  2. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  3. Analyzing the Results of Monte Carlo Studies in Item Response Theory.

    ERIC Educational Resources Information Center

    Harwell, Michael R.

    1997-01-01

    Results from two Monte Carlo studies in item response theory (comparisons of computer item analysis programs and Bayes estimation procedures) are analyzed with inferential methods to illustrate the procedures' strengths. It is recommended that researchers in item response theory use both descriptive and inferential methods to analyze Monte Carlo…

  4. Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm

    ERIC Educational Resources Information Center

    Stewart, Wayne; Stewart, Sepideh

    2014-01-01

    For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…

  5. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for Tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published

  6. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  7. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  8. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N^3 - N^4, where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  9. Properties of reactive oxygen species by quantum Monte Carlo.

    PubMed

    Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N(3) - N(4), where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles. PMID:25005287

  10. Properties of reactive oxygen species by quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N3 - N4, where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  11. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  12. Monte Carlo Ion Transport Analysis Code.

    Energy Science and Technology Software Center (ESTSC)

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered is composed of multilayer polyatomic materials.

  13. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  14. Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Roberti, Laura

    1998-01-01

    The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate has in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.

  15. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    PubMed Central

    Siswantoro, Joko; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. Another approach measures the volume of objects with the Monte Carlo method, which uses random points: it only requires information on whether a random point falls inside or outside the object, and needs no 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
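    The core of the Monte Carlo volume method is the inside/outside test: the object's volume is the bounding-box volume times the fraction of uniform random points the test accepts. A minimal sketch, using an analytic sphere in place of the camera-derived binary images of the paper (the names and bounds are illustrative):

```python
import random

def mc_volume(inside, bounds, n=200_000, seed=3):
    """Monte Carlo volume estimate: sample uniform random points in an
    axis-aligned bounding box and count the fraction accepted by the
    inside/outside predicate; volume = box volume * hit fraction."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        for _ in range(n)
    )
    return box * hits / n

# example: unit-radius sphere, true volume 4*pi/3 = 4.18879...
sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
vol = mc_volume(sphere, ((-1, 1), (-1, 1), (-1, 1)))
```

    In the paper's setting the predicate would be evaluated by projecting the candidate point into each binary camera image rather than by an analytic formula.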

  16. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach clearly pays in terms of the quality of the samples generated.
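    The delayed-rejection idea can be sketched for a one-dimensional target: when a proposal would be rejected, the leapfrog trajectory is simply extended by another batch of steps and the fresh endpoint is accepted with a cumulative acceptance probability. The sketch below uses a cumulative look-ahead style acceptance rule as an assumption; it is not a transcription of the paper's generalized HMC formulas, and all names and tuning values are illustrative.

```python
import math
import random

def extra_chance_hmc(U, dU, q0=0.0, eps=0.3, L=8, K=3, iters=4000, seed=11):
    """Extra-chance HMC sketch for a 1D target with potential U(q) and
    gradient dU(q). On a would-be rejection the trajectory is extended by
    L more leapfrog steps (up to K chances), and proposal i is accepted
    with the cumulative probability max(0, min(1, r_i) - max_j<i min(1, r_j))."""
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(iters):
        p = rng.gauss(0.0, 1.0)                 # fresh momentum
        h0 = U(q) + 0.5 * p * p
        qi, pi, cummax = q, p, 0.0
        for _chance in range(K):
            pi -= 0.5 * eps * dU(qi)            # L leapfrog steps, continuing
            for s in range(L):                  # the same trajectory
                qi += eps * pi
                if s < L - 1:
                    pi -= eps * dU(qi)
            pi -= 0.5 * eps * dU(qi)
            r = math.exp(min(0.0, h0 - (U(qi) + 0.5 * pi * pi)))
            m = max(cummax, r)                  # cumulative acceptance level
            if cummax < 1.0 and rng.random() < (m - cummax) / (1.0 - cummax):
                q = qi                          # accept this proposal
                break
            cummax = m
        samples.append(q)
    return samples
```

    With a well-tuned step size the first chance is almost always taken and the scheme reduces to ordinary HMC; the extra chances only pay when rejections would otherwise waste the trajectory.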

  17. Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport

    NASA Astrophysics Data System (ADS)

    Jacqmin, Dustin J.

    Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step-sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step-sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and

  18. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    PubMed

    Ghassoun; Jehouani

    2000-10-01

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. On the other hand, the Monte Carlo method is a reliable mathematical tool for evaluating the neutron resonance escape probability. But it suffers from large statistical deviations of results and long computation times. In order to overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The energy of the point neutron source is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability compared to the usual analog simulation. Splitting and Russian Roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly for high energies. A comparison is done, for several dilutions, between the Monte Carlo results and those of deterministic methods based on the numerical solution of the neutron slowing-down equations by the iterative method. PMID:11003535
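    Russian roulette and splitting trade particle count for weight while leaving the expected weight unchanged, which is what keeps them unbiased. A minimal sketch of the two operations (the thresholds and names are illustrative, not taken from the paper):

```python
import random

def roulette(weight, w_min=0.1, w_survive=0.5, rng=random):
    """Russian roulette: a particle with weight below w_min survives with
    probability weight / w_survive and is promoted to w_survive, else it
    is killed. Expected weight is preserved:
    (weight / w_survive) * w_survive = weight."""
    if weight >= w_min:
        return weight
    return w_survive if rng.random() < weight / w_survive else 0.0

def split(weight, w_max=2.0):
    """Splitting: a particle heavier than w_max is replaced by n copies
    of weight weight / n, preserving the total weight exactly."""
    if weight <= w_max:
        return [weight]
    n = int(weight / w_max) + 1
    return [weight / n] * n
```

    Roulette spends less time on unimportant (low-weight) particles, while splitting reduces variance where particles are important; the paper applies these in the energy range around the resonances.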

  19. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  20. Fast evaluation of multideterminant wavefunctions in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Morales, Miguel A.; Clark, Bryan K.; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo

    2011-03-01

    Quantum Monte Carlo (QMC) methods such as variational and diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied by a sum of Slater determinants. In this talk we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient, and easily parallelized. The algorithm computes the multideterminant ratios of a series of particle-hole excitations in time O(n^2) + O(n_s n) + O(n_e), where n, n_s, and n_e are the number of particles, single-particle excitations, and total number of excitations, respectively. This is accomplished by producing a (relatively) compact table that contains all the information required to read off the excitation ratios. In addition we describe how to compute the gradients and Laplacians of these multideterminant terms. This work was performed under the auspices of: the US DOE by LLNL under Contract DE-AC52-07NA27344, the US DOE under Contract DOE-DE-FG05-08OR23336 and by NSF under No.0904572.
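    The cheap excitation ratios rest on the matrix determinant lemma: if column j of the Slater matrix A is replaced by a vector v of new orbital values, then det(A')/det(A) = (A^{-1} v)_j, so a single matrix inverse prices every single excitation in O(n). A hedged numpy sketch of that identity (not the authors' table algorithm; the names are illustrative):

```python
import numpy as np

def excitation_ratios(A, new_columns):
    """Determinant ratios for single-column (single-excitation)
    replacements of the Slater matrix A. `new_columns` maps a column
    index j to its replacement vector v; by the matrix determinant
    lemma the ratio det(A')/det(A) equals (A^{-1} v)_j, costing O(n)
    per excitation after one O(n^3) inverse."""
    Ainv = np.linalg.inv(A)
    return {j: float(Ainv[j] @ v) for j, v in new_columns.items()}
```

    The paper's contribution goes beyond this single-excitation case, tabulating the quantities needed to read off ratios for whole series of particle-hole excitations, plus their gradients and Laplacians.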

  1. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
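
    A minimal analog (non-variance-reduced) sketch of Monte Carlo reliability estimation with Weibull component lifetimes, assuming a simple k-out-of-n system rather than the HARP fault/error-handling models; all parameters are illustrative:

    ```python
    import numpy as np

    def weibull_system_reliability(t_mission, shape, scale, n_comp=3, k_required=2,
                                   n_trials=100_000, seed=1):
        """Analog Monte Carlo estimate of the reliability of a k-out-of-n system
        whose components fail at independent Weibull-distributed times."""
        rng = np.random.default_rng(seed)
        # Weibull lifetimes: scale * standard Weibull(shape)
        lifetimes = scale * rng.weibull(shape, size=(n_trials, n_comp))
        surviving = (lifetimes > t_mission).sum(axis=1)
        return (surviving >= k_required).mean()

    # 2-out-of-3 system; the analytic answer is 3R^2(1-R) + R^3 with
    # R = exp(-(t/scale)^shape) ~ 0.825, i.e. system reliability ~ 0.919.
    r = weibull_system_reliability(t_mission=1.0, shape=1.5, scale=3.0)
    ```

    Variance-reduced versions (as in MC-HARP) would bias the sampling toward the rare failure states instead of sampling lifetimes analogously.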

  2. A General-Purpose Monte Carlo Gamma-Ray Transport Code System for Minicomputers.

    Energy Science and Technology Software Center (ESTSC)

    1981-08-27

    Version 00 The OGRE code system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two codes which treat slab geometry. OGRE-P1 computes the dose on one side of a slab for a source on the other side, and HOTONE computes energy deposition in addition. The source may be monodirectional, isotropic, or cosine distributed.

  3. Combining four Monte Carlo estimators for radiation momentum deposition

    SciTech Connect

    Urbatsch, Todd J; Hykes, Joshua M

    2010-11-18

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the combined FOM is only a few percent higher than that of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10-20% greater than any of the solo estimators' FOMs. In addition, the numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions.
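
    One standard way to combine correlated unbiased estimators of the same quantity is with inverse-covariance (minimum-variance) weights; the synthetic data below merely stands in for real tallies, and this is a generic sketch rather than the paper's exact combination:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_mean = 5.0
    n = 20_000
    # Three correlated, unbiased estimators of the same quantity
    # (synthetic stand-ins for analog / collision / track-length tallies).
    base = rng.standard_normal(n)
    est = np.stack([true_mean + base + 0.5 * rng.standard_normal(n),
                    true_mean + 0.8 * base + 1.0 * rng.standard_normal(n),
                    true_mean + 0.3 * base + 0.2 * rng.standard_normal(n)])

    C = np.cov(est)                   # sample covariance of the estimators
    ones = np.ones(3)
    w = np.linalg.solve(C, ones)
    w /= w @ ones                     # minimum-variance unbiased weights, sum to 1
    combined = w @ est.mean(axis=1)

    # By construction the combination's variance cannot exceed the best solo estimator's.
    var_combined = w @ C @ w
    assert var_combined <= C.diagonal().min() + 1e-12
    ```

    Because picking any single estimator is itself a valid weighting, the optimal weights can only match or beat the best solo estimator, mirroring the FOM behavior reported above.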

  4. Monte Carlo Modeling of High-Energy Film Radiography

    SciTech Connect

    Miller, A.C., Jr.; Cochran, J.L.; Lamberti, V.E.

    2003-03-28

    High-energy film radiography methods, adapted in the past to performing specific tasks, must now meet increasing demands to identify defects and perform critical measurements in a wide variety of manufacturing processes. Although film provides unequaled resolution for most components and assemblies, image quality must be enhanced with much more detailed information to identify problems and qualify features of interest inside manufactured items. The work described is concerned with improving current 9 MeV nondestructive practice by optimizing the important parameters involved in film radiography using computational methods. In order to follow important scattering effects produced by electrons, the Monte Carlo N-Particle (MCNP) transport code was used with advanced, highly parallel computer systems. The work has provided a more detailed understanding of latent image formation at high X-ray energies, and suggests that improvements can be made in our ability to identify defects and to obtain much more detail in images of fine features.

  5. Stabilized multilevel Monte Carlo method for stiff stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Abdulle, Assyr; Blumenthal, Adrian

    2013-10-01

    A multilevel Monte Carlo (MLMC) method for mean-square stable stochastic differential equations with multiple scales is proposed. For such problems, which we call stiff, the performance of MLMC methods based on classical explicit methods deteriorates because the time step restriction needed to resolve the fastest scales prevents exploiting all the levels of the MLMC approach. We show that by switching to explicit stabilized stochastic methods and balancing the stabilization procedure simultaneously with the hierarchical sampling strategy of MLMC methods, the computational cost for stiff systems is significantly reduced, while keeping the computational algorithm fully explicit and easy to implement. Numerical experiments on linear and nonlinear stochastic differential equations and on a stochastic partial differential equation illustrate the performance of the stabilized MLMC method and corroborate our theoretical findings.
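
    A minimal sketch of the plain (unstabilized) MLMC telescoping estimator for a scalar SDE, assuming Euler-Maruyama on nested grids with coupled Brownian increments; the stabilized integrators of the paper are not shown, and all parameters are illustrative:

    ```python
    import numpy as np

    def mlmc_gbm(n_levels=4, n0=20_000, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=3):
        """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW,
        using the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for level in range(n_levels):
            n_samples = n0 // 2**level       # fewer samples on finer (costlier) levels
            m_fine = 2**(level + 1)          # fine-grid steps on this level
            dt_f = T / m_fine
            dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, m_fine))
            xf = np.full(n_samples, x0)
            for k in range(m_fine):          # fine path
                xf = xf + mu * xf * dt_f + sigma * xf * dW[:, k]
            if level == 0:
                total += xf.mean()           # coarsest level: plain estimator
            else:
                dt_c = 2 * dt_f
                xc = np.full(n_samples, x0)
                for k in range(m_fine // 2): # coarse path driven by summed increments
                    xc = xc + mu * xc * dt_c + sigma * xc * (dW[:, 2*k] + dW[:, 2*k+1])
                total += (xf - xc).mean()    # level correction E[P_l - P_{l-1}]
        return total

    estimate = mlmc_gbm()   # exact E[X_T] = x0 * exp(mu*T) ~ 1.0513
    ```

    The stiffness issue in the abstract arises when stability (not accuracy) forces dt on every level to be tiny, so the coarse levels above lose their cost advantage.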

  6. Monte Carlo dosimetry for synchrotron stereotactic radiotherapy of brain tumours

    NASA Astrophysics Data System (ADS)

    Boudou, Caroline; Balosso, Jacques; Estève, François; Elleaume, Hélène

    2005-10-01

    A radiation dose enhancement can be obtained in brain tumours after infusion of an iodinated contrast agent and irradiation with kilovoltage x-rays in tomography mode. The aim of this study was to assess dosimetric properties of the synchrotron stereotactic radiotherapy (SSR) technique applied to humans, for preparing clinical trials. We designed an interface for dose computation based on a Monte Carlo code (MCNPX). A patient head was constructed from computed tomography (CT) data and a tumour volume was modelled. Dose distributions were calculated in the SSR configuration for various beam energies and iodine contents in the target volume. From the calculations, it appears that the iodine-filled target (10 mg ml-1) can be efficiently irradiated by a monochromatic beam of energy ranging from 50 to 85 keV. This paper demonstrates the feasibility of stereotactic radiotherapy for treating deep-seated brain tumours with monoenergetic x-rays from a synchrotron.

  7. Methods for variance reduction in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
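
    The quasi-random idea can be illustrated with a Halton sequence against pseudo-random points on a smooth test integrand; this is a generic sketch of low-discrepancy sampling, not the paper's dose-tally code:

    ```python
    import numpy as np

    def halton(n, base):
        """Radical-inverse (Van der Corput) low-discrepancy sequence in the given base."""
        seq = np.zeros(n)
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            seq[i] = x
        return seq

    n = 4096
    # Integrate f(x, y) = x*y over the unit square; the exact answer is 1/4.
    u = np.column_stack([halton(n, 2), halton(n, 3)])          # quasi-random points
    err_qmc = abs((u[:, 0] * u[:, 1]).mean() - 0.25)

    rng = np.random.default_rng(4)
    batches = rng.random((20, n, 2))                            # 20 pseudo-random batches
    err_mc = np.sqrt((((batches[:, :, 0] * batches[:, :, 1]).mean(axis=1)
                       - 0.25) ** 2).mean())                    # RMS pseudo-random error

    assert err_qmc < err_mc        # low-discrepancy points win on this smooth integrand
    ```

    The low-discrepancy points fill the domain far more evenly than pseudo-random draws, which is why the error decays close to O(1/n) instead of O(1/sqrt(n)).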

  8. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  9. Monte Carlo calculations of (e,e′p) reactions

    SciTech Connect

    Pieper, S.C.; Pandharipande, V.R.; Boffi, S.; Radici, M.

    1995-08-01

    We have used our ¹⁶O Monte Carlo program to compute the p3/2 quasihole wave function in ¹⁶O and the Pavia program to compute ¹⁶O(e,e′p)¹⁵N(3/2⁻) with this wave function. We also developed a local-density approximation (LDA) for obtaining the quasihole wave function from a mean-field wave function, and studied the effects of using this LDA on the outgoing distorted waves. We find that we can predict correctly the contribution of the interior of the nucleus to the observed (e,e′p) cross sections, but the surface contribution is too large. The LDA modifications to the outgoing wave function are small.

  10. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results, and finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by a factor of up to two.
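
    For reference, the univariate scheme that these multivariate orderings generalize is the standard Horner rule; a minimal sketch:

    ```python
    def horner(coeffs, x):
        """Evaluate c0 + c1*x + c2*x^2 + ... with one multiply and one add per
        coefficient, instead of the O(n^2) multiplies of the naive power form."""
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    # 1 + 2x + 3x^2 at x = 2  ->  17
    assert horner([1.0, 2.0, 3.0], 2.0) == 17.0
    ```

    In the multivariate generalization one repeatedly factors out a chosen variable and recurses on the coefficient polynomials; the variable ordering (greedy vs. MCTS-found) is what determines the final operation count.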

  11. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
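
    A toy version of a bounded-cascade liquid-water field and the resulting plane-parallel bias (a Jensen's-inequality effect with a concave albedo law) might look like this; the albedo law and all parameters are illustrative only, not the paper's radiative transfer:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def bounded_cascade(n_levels, f=0.5, c=0.8, w0=1.0):
        """1-D bounded cascade: at each level, transfer a shrinking random
        fraction of liquid water between the halves of every cell, conserving
        the total water in the field."""
        w = np.array([w0])
        for k in range(n_levels):
            s = rng.choice([-1.0, 1.0], size=w.size)
            frac = f * c**k * s
            w = np.column_stack([w * (1 + frac), w * (1 - frac)]).ravel()
        return w

    tau = 13.0 * bounded_cascade(12)      # optical depths, mean exactly 13
    albedo = tau / (tau + 7.7)            # simple concave albedo law (illustrative)
    # Plane-parallel bias: albedo of the mean minus mean of the albedos (> 0).
    pp_bias = tau.mean() / (tau.mean() + 7.7) - albedo.mean()
    assert pp_bias > 0
    ```

    Because the albedo is concave in optical depth, smoothing the liquid water into a uniform slab always overestimates the area-average albedo, which is exactly the bias the abstract quantifies.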

  12. Quantum Monte Carlo with very large multideterminant wavefunctions.

    PubMed

    Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel

    2016-07-01

    An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on the reduction of the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants Ndet^σ (σ=↑,↓) with a non-negligible weight in the expansion is of order O(√Ndet). We show that a careful implementation of the calculation of the Ndet-dependent contributions can make this step negligible enough so that in practice the algorithm scales as the total number of unique spin-specific determinants, Ndet↑ + Ndet↓, over a wide range of total numbers of determinants (here, Ndet up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants, with a computational cost increase of only a factor of ∼400 relative to a single-determinant calculation, are shown to be feasible. © 2016 Wiley Periodicals, Inc. PMID:27302337
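
    The Sherman-Morrison column update that the abstract builds on can be sketched directly with generic linear algebra (this is the textbook rank-one update, not the authors' optimized implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 5
    A = rng.standard_normal((n, n))       # Slater matrix (columns = orbitals)
    Ainv = np.linalg.inv(A)

    # Replace column j of A by vector b:  A' = A + (b - A[:, j]) e_j^T.
    # Sherman-Morrison updates the inverse in O(n^2) instead of O(n^3).
    j = 2
    b = rng.standard_normal(n)
    u = b - A[:, j]
    ratio = 1.0 + Ainv[j] @ u             # = det(A') / det(A) (determinant lemma)
    Ainv_new = Ainv - np.outer(Ainv @ u, Ainv[j]) / ratio

    # Verify against a from-scratch inverse of the updated matrix.
    A2 = A.copy()
    A2[:, j] = b
    assert np.allclose(Ainv_new, np.linalg.inv(A2))
    ```

    Reducing how many of these column substitutions are actually performed, by grouping identical spin-specific determinants, is the core of the speedup described above.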

  13. Quadric solids and computational geometry

    SciTech Connect

    Emery, J.D.

    1980-07-25

    As part of the CAD-CAM development project, this report discusses the mathematics underlying the program QUADRIC, which does computations on objects modeled as Boolean combinations of quadric half-spaces. Topics considered include projective space, quadric surfaces, polars, affine transformations, the construction of solids, shaded image, the inertia tensor, moments, volume, surface integrals, Monte Carlo integration, and stratified sampling. 1 figure.

  14. Energy-Driven Kinetic Monte Carlo Method and Its Application in Fullerene Coalescence.

    PubMed

    Ding, Feng; Yakobson, Boris I

    2014-09-01

    Mimicking the conventional barrier-based kinetic Monte Carlo simulation, an energy-driven kinetic Monte Carlo (EDKMC) method was developed to study the structural transformation of carbon nanomaterials. The new method is many orders of magnitude faster than standard molecular dynamics or Monte Carlo (MC) simulations and thus allows us to explore rare events within a reasonable computational time. As an example, the temperature dependence of fullerene coalescence was studied. The simulation, for the first time, revealed that short capped single-walled carbon nanotubes (SWNTs) appear as low-energy metastable structures during the structural evolution. PMID:26278237
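
    An energy-driven acceptance step is in the spirit of the Metropolis criterion; a generic sketch (kT, the move generator, and the energy model are all placeholders, not the EDKMC specifics):

    ```python
    import math
    import random

    def accept(delta_e, kT):
        """Accept a proposed structural change: always if it lowers the energy,
        otherwise with Boltzmann probability exp(-delta_e / kT)."""
        return delta_e <= 0.0 or random.random() < math.exp(-delta_e / kT)

    random.seed(9)
    assert accept(-0.3, 1.0)          # downhill moves are always accepted
    # An uphill move of one kT is accepted with frequency ~ exp(-1) ~ 0.368.
    freq = sum(accept(1.0, 1.0) for _ in range(100_000)) / 100_000
    ```

    Driving the walk by energy changes rather than by tabulated barriers is what lets such a scheme skip over the long waiting times that dominate molecular dynamics.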

  15. Automated-biasing approach to Monte Carlo shipping-cask calculations

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.

    1982-01-01

    Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system.

  16. A New Method for the Calculation of Diffusion Coefficients with Monte Carlo

    NASA Astrophysics Data System (ADS)

    Dorval, Eric

    2014-06-01

    This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.

  17. Improved diffusion Monte Carlo and the Brownian fan

    NASA Astrophysics Data System (ADS)

    Weare, J.; Hairer, M.

    2012-12-01

    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the ``naive'' generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0. This is shown
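
    The branching step described above can be sketched with stochastic rounding of walker weights (a generic textbook form, not the authors' modified algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def branch(walkers, weights):
        """Copy each walker floor(w + u) times (u uniform on [0, 1)), so the
        expected number of copies equals its weight w: weight-0.3 walkers are
        killed 70% of the time, weight-1.7 walkers get a second copy 70% of
        the time. Copies then continue with unit weight."""
        copies = np.floor(weights + rng.random(weights.size)).astype(int)
        return np.repeat(walkers, copies)

    walkers = np.arange(10.0)                      # toy 1-D walker positions
    pop = branch(walkers, np.full(10, 1.7))        # expected population: 17
    ```

    The pathology the abstract addresses appears when the weights fluctuate wildly between branching steps, so almost all walkers are killed and a few are copied enormously; the proposed modification tames exactly that variance.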

  18. Approaching chemical accuracy with quantum Monte Carlo.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2012-03-28

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space. PMID:22462844

  19. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

  20. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  1. Neutron streaming through shield ducts using a discrete ordinates/Monte Carlo method

    SciTech Connect

    Urban, W.T.; Baker, R.S.

    1993-08-18

    A common problem in shield design is determining the neutron flux that streams through ducts in shields and also that penetrates the shield after having traveled partway down the duct. The neutrons that stream down the duct can be computed in a straightforward manner using Monte Carlo techniques. On the other hand, those neutrons that must penetrate a significant portion of the shield are more easily handled using discrete ordinates methods. A hybrid discrete ordinates/Monte Carlo code, TWODANT/MC, which is an extension of the existing discrete ordinates code TWODANT, has been developed at Los Alamos to allow the efficient, accurate treatment of both streaming and deep penetration problems in a single calculation. In this paper we provide examples of the application of TWODANT/MC to typical geometries that are encountered in shield design and compare the results with those obtained using the Los Alamos Monte Carlo code MCNP.

  2. Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods

    SciTech Connect

    Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
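
    The weight-window variance reduction used in the second task can be sketched generically; the window bounds here are arbitrary illustrations, whereas MCNP derives them from the adjoint (importance) solution:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def apply_weight_window(weights, w_low=0.5, w_high=2.0, w_survive=1.0):
        """Russian-roulette particles below the window (survivors get weight
        w_survive, so the expected weight is preserved) and split particles
        above it into equal-weight copies (weight preserved exactly)."""
        out = []
        for w in weights:
            if w < w_low:
                if rng.random() < w / w_survive:     # roulette: survive with prob w
                    out.append(w_survive)
            elif w > w_high:
                n = int(np.ceil(w / w_survive))      # splitting
                out.extend([w / n] * n)
            else:
                out.append(w)
        return out

    w_in = rng.exponential(1.0, size=20_000) * 2.0
    pop = apply_weight_window(w_in)                  # total weight is preserved on average
    ```

    Keeping all weights inside a window chosen from the adjoint solution is what concentrates the Monte Carlo effort in the regions that matter, which is the variance reduction the second task automates.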

  3. Accuracy of electronic wave functions in quantum Monte Carlo: The effect of high-order correlations

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Jung; Umrigar, C. J.; Nightingale, M. P.

    1997-08-01

    Compact and accurate wave functions can be constructed by quantum Monte Carlo methods. Typically, these wave functions consist of a sum of a small number of Slater determinants multiplied by a Jastrow factor. In this paper we study the importance of including high-order electron-electron-nucleus correlations in the Jastrow factor. An efficient algorithm based on the theory of invariants is used to compute the high-body correlations. We observe significant improvements in the variational Monte Carlo energy and in the fluctuations of the local energies but not in the fixed-node diffusion Monte Carlo energies. Improvements for the ground states of physical, fermionic atoms are found to be smaller than those for the ground states of fictitious, bosonic atoms, indicating that errors in the nodal surfaces of the fermionic wave functions are a limiting factor.

  4. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  5. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  6. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    SciTech Connect

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali

    2014-12-28

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.

  7. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. 
The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It

  8. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
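
    The correlated-sampling idea, reusing the same random numbers for both similar geometries so their statistical fluctuations cancel in the ratio, can be sketched on a toy transmission problem; the "dose" here is only an illustrative stand-in, not an EGSnrc calculation:

    ```python
    import numpy as np

    def transmission(thickness, u):
        """Toy tally: fraction of sampled path lengths that cross a slab of the
        given thickness (exponential path lengths, unit mean free path)."""
        paths = -np.log(u)
        return (paths > thickness).mean()

    rng = np.random.default_rng(8)
    n = 50_000
    u = rng.random(n)                  # one shared stream -> correlated sampling
    r_corr = transmission(1.00, u) / transmission(1.05, u)

    u2 = rng.random(n)                 # independent streams, for comparison
    r_indep = transmission(1.00, u) / transmission(1.05, u2)
    # Exact ratio: exp(-1.00) / exp(-1.05) = exp(0.05) ~ 1.0513.
    ```

    With shared random numbers, both tallies fluctuate up and down together, so the ratio estimate is far more precise than with independent streams at the same cost.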

  9. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    SciTech Connect

    Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2012-07-01

    The availability of high-performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with largely uniform material compositions and fuel and moderator temperatures, constructing core models poses no problem. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications are shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
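
The interpolation step described above can be sketched as follows. This is a minimal sketch with an invented temperature grid and toy cross-section values; production codes may interpolate differently (e.g. in sqrt(T)), but the point is the same: only a limited number of pre-generated libraries is needed.

```python
import numpy as np

# Pre-generated cross-section sets at a few temperatures (toy values)
temps = np.array([300.0, 600.0, 900.0, 1200.0])          # K
energy = np.logspace(-5, 1, 50)                          # eV
xs_sets = np.array([10.0 / np.sqrt(energy) * (1.0 + t / 3000.0) for t in temps])

def xs_at(T):
    """Linearly interpolate between the two bracketing temperature data sets,
    so a cross section is available at any temperature the thermal-hydraulics
    code reports, without generating a library for every temperature."""
    i = np.clip(np.searchsorted(temps, T) - 1, 0, len(temps) - 2)
    w = (T - temps[i]) / (temps[i + 1] - temps[i])
    return (1.0 - w) * xs_sets[i] + w * xs_sets[i + 1]
```

At a grid temperature the interpolant reproduces the tabulated set exactly; midway between two grid points it returns their average.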

  10. Computing support for High Energy Physics

    SciTech Connect

    Avery, P.; Yelton, J.

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  11. Spatial Correlations in Monte Carlo Criticality Simulations

    NASA Astrophysics Data System (ADS)

    Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.

    2014-06-01

    Temporal correlations arising in Monte Carlo criticality codes have long focused the attention of both developers and practitioners. These correlations affect the evaluation of tallies in loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. These time correlations are closely related to spatial correlations, the two variables being linked by the transport equation. This paper therefore addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that end, we propose a spatial correlation function well suited to Monte Carlo simulations and demonstrate its use in the simulation of a fuel pin-cell. The results are discussed, modeled and interpreted using the tools of branching processes from statistical mechanics. A mechanism called "neutron clustering", which affects such simulations, is discussed in this framework.
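
A minimal 1D version of such a diagnostic is a pair correlation function g(r): the histogram of pairwise distances between sampled positions, divided by the expectation for uniformly distributed points. This sketch uses toy point sets (a uniform sample vs. an artificially clustered one standing in for fission sites), not an actual transport simulation; g(r) well above 1 at small r is the signature of clustering.

```python
import numpy as np

rng = np.random.default_rng(42)

def pair_correlation(x, nbins=20):
    """g(r) estimate for points in [0, 1): pair-distance histogram divided by
    the uniform expectation (the 1D pair-distance density is 2(1 - r))."""
    d = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), 1)]
    hist, edges = np.histogram(d, bins=nbins, range=(0.0, 1.0))
    r = 0.5 * (edges[:-1] + edges[1:])
    expected = len(d) * 2.0 * (1.0 - r) * (edges[1] - edges[0])
    return r, hist / expected

uniform = rng.random(2000)
centers = rng.random(5)                            # toy "cluster" locations
clustered = np.mod(centers[rng.integers(0, 5, 2000)]
                   + 0.01 * rng.standard_normal(2000), 1.0)

_, g_uni = pair_correlation(uniform)
_, g_clu = pair_correlation(clustered)
# g_clu exceeds 1 at small r (clustering); g_uni stays near 1 at all r
```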

  12. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Contrast also increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed to assess the effects of breast density and breast thickness.
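
The figures of merit used to rank the tilt angles and grid ratios above (contrast and SNR) are simple to compute from region-of-interest statistics. This sketch uses synthetic Gaussian ROI data as a stand-in for the simulated slot-scan images; the numbers are purely illustrative.

```python
import numpy as np

def contrast_and_snr(signal_roi, background_roi):
    """Contrast = relative excess of the signal ROI mean over background;
    SNR = that excess in units of the background noise."""
    s, b = np.mean(signal_roi), np.mean(background_roi)
    return (s - b) / b, (s - b) / np.std(background_roi)

rng = np.random.default_rng(5)
background = rng.normal(100.0, 5.0, 10_000)   # toy detector counts, background ROI
signal = rng.normal(130.0, 5.0, 10_000)       # toy detector counts, lesion ROI
c, snr = contrast_and_snr(signal, background)
```

Note the trade-off the abstract reports: a higher grid ratio suppresses unwanted scatter (raising the mean separation, hence contrast) while also cutting total counts, which raises relative noise and lowers SNR.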

  13. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    SciTech Connect

    Bostani, Maryam; McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  14. Clinical implementation of the Peregrine Monte Carlo dose calculations system for photon beam therapy

    SciTech Connect

    Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

    1999-07-01

    PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

  15. Fast Monte Carlo for ion beam analysis simulations

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François

    2008-04-01

    A Monte Carlo program for the simulation of ion beam analysis data is presented. It combines mainly four features: (i) ion slowdown is computed separately from the main scattering/recoil event, which is directed towards the detector. (ii) A virtual detector, that is, a detector larger than the actual one, can be used, followed by trajectory correction. (iii) For each collision during ion slowdown, scattering angle components are extracted from tables. (iv) Tables of scattering angle components, stopping power and energy straggling are indexed using the binary representation of floating point numbers, which allows logarithmic distribution of these tables without the computation of logarithms to access them. Tables are sufficiently fine-grained that interpolation is not necessary. Ion slowdown computation thus avoids trigonometric, inverse and transcendental function calls and, as much as possible, divisions. All these improvements make possible the computation of 10^7 collisions/s on current PCs. Results for transmitted ions of several masses in various substrates are well comparable to those obtained using SRIM-2006 in terms of both angular and energy distributions, as long as a sufficiently large number of collisions is considered for each ion. Examples of simulated spectra show good agreement with experimental data, although a large detector rather than the virtual detector has to be used to properly simulate background signals that are due to plural collisions. The program, written in standard C, is open-source and distributed under the terms of the GNU General Public License.
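
Feature (iv) can be demonstrated directly: because an IEEE-754 double stores its exponent in the high bits, a single shift extracts a logarithmically spaced bin index with no log() call. The sketch below (the helper name and the 4-mantissa-bit choice are illustrative, not the paper's exact layout) gives 2^4 = 16 table entries per factor of two in energy.

```python
import struct

def table_index(x, mantissa_bits=4):
    """Logarithmic table index from the binary representation of a float64:
    the biased exponent plus the top mantissa bits, obtained with one shift,
    so no logarithm is ever computed during the lookup."""
    bits, = struct.unpack("<Q", struct.pack("<d", x))   # raw IEEE-754 bits
    return bits >> (52 - mantissa_bits)                 # drop the low mantissa bits

# Consecutive octaves (factors of two) map to consecutive blocks of 16 indices,
# i.e. the table is logarithmically distributed, exactly as the abstract describes.
```

Values within the same 1/16th of an octave share an index, so a sufficiently fine-grained table needs no interpolation at all.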

  16. Moments of spectral functions: Monte Carlo evaluation and verification.

    PubMed

    Predescu, Cristian

    2005-11-01

    The subject of the present study is the Monte Carlo path-integral evaluation of the moments of spectral functions. Such moments can be computed by formal differentiation of certain estimating functionals that are infinitely differentiable with respect to time whenever the potential function is arbitrarily smooth. Here, I demonstrate that the numerical differentiation of the estimating functionals can be more successfully implemented by means of pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial interpolant), which utilize information from the entire interval. The algorithmic detail that leads to robust numerical approximations is the fact that the path-integral action, and not the actual estimating functional, is interpolated. Although the resulting approximation to the estimating functional is nonlinear, the derivatives can be computed from it in a fast and stable way by contour integration in the complex plane, with the help of the Cauchy integral formula (e.g., by Lyness' method). An interesting aspect of the present development is that Hamburger's conditions for a finite sequence of numbers to be a moment sequence provide the necessary and sufficient criteria for the computed data to be compatible with the existence of an inversion algorithm. Finally, the issue of the appearance of the sign problem in the computation of moments, albeit in a milder form than for other quantities, is addressed. PMID:16383787
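
The pseudospectral ingredient, exact differentiation of a Chebyshev interpolant built from samples over the whole interval, can be sketched with NumPy. The test function here is an arbitrary smooth stand-in for an estimating functional, not the paper's path-integral quantity.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a smooth toy functional f(t) at Chebyshev points of [0, 1]
t = 0.5 * (np.cos(np.pi * np.arange(33) / 32) + 1.0)
f = np.exp(-t) * np.cos(t)

coef = C.chebfit(2.0 * t - 1.0, f, deg=32)   # interpolant on the standard [-1, 1]
dcoef = 2.0 * C.chebder(coef)                # exact derivative; factor 2 is the
                                             # chain rule for the map t -> 2t - 1
approx = C.chebval(2.0 * 0.3 - 1.0, dcoef)   # df/dt at t = 0.3
exact = -np.exp(-0.3) * (np.cos(0.3) + np.sin(0.3))
```

Unlike finite differences, which amplify noise as the step shrinks, the Chebyshev derivative uses all 33 samples and is accurate essentially to rounding for smooth data.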

  17. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  18. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  19. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  20. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  1. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  2. CT based 3D Monte Carlo radiation therapy treatment planning.

    PubMed

    Wallace, S; Allen, B J

    1998-06-01

    This paper outlines the "voxel reconstruction" technique used to model the macroscopic human anatomy of the cranial, abdominal and cervical regions directly from CT scans. Tissue composition, density, and radiation transport characteristics were assigned to each individual volume element (voxel) automatically depending on its greyscale number and physical location. Both external beam and brachytherapy treatment techniques were simulated using the Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) version 3A. To obtain a high resolution dose calculation, yet not overly extend computational times, variable voxel sizes have been introduced. In regions of interest where high attention to anatomical detail and dose calculation was required, the voxel dimensions were reduced to a few millimetres. In less important regions that only influence the region of interest via scattered radiation, the voxel dimensions were increased to the scale of centimetres. With the use of relatively old (1991) supercomputing hardware, dose calculations were performed in under 10 hours to a standard deviation of 5% in each voxel with a resolution of a few millimetres--current hardware should substantially improve these figures. It is envisaged that with coupled photon/electron transport incorporated into MCNP version 4A and 4B, conventional photon and electron treatment planning will be undertaken using this technique, in addition to neutron and associated photon dosimetry presented here. PMID:9745789
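
The greyscale-to-material assignment at the heart of the voxel reconstruction technique can be sketched as a thresholding lookup. The HU band edges, material names, and densities below are illustrative placeholders, not the paper's calibration.

```python
import numpy as np

# Illustrative greyscale (HU) band edges and the material/density for each band
HU_EDGES = np.array([-1100.0, -900.0, -100.0, 150.0, 3000.0])
MATERIALS = np.array(["air", "lung", "soft_tissue", "bone"])
DENSITIES = np.array([0.0012, 0.30, 1.04, 1.85])   # g/cm^3, hypothetical values

def voxelize(ct):
    """Assign each voxel a material and mass density from its greyscale number,
    as done automatically per voxel in the reconstruction technique."""
    idx = np.clip(np.searchsorted(HU_EDGES, ct, side="right") - 1,
                  0, len(MATERIALS) - 1)
    return MATERIALS[idx], DENSITIES[idx]

mats, rho = voxelize(np.array([[-1000.0, -500.0], [30.0, 800.0]]))
```

Because the lookup is vectorized over the whole CT array, variable voxel sizes only change which greyscale values are averaged into each voxel, not the assignment logic.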

  3. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  4. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanism of damage by ionising radiation in the cell. These codes have continuously been modified to include new improved cross sections and computational techniques. This paper provides a summary of input data for ionizations, excitations and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the wall-counter responses even at such a low ion energy.

  5. Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoyan; Lane, Stephen

    2010-02-01

    We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits the vessel, ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP. The fluorescence from layers below in this second situation does not contribute to the signal. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina looks very promising.

  6. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous energy nuclear data has been investigated.
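
The weight-window mechanism mentioned above rests on a standard variance-reduction game, sketched here in simplified form (window bounds and survival weight are arbitrary illustrative choices): particles heavier than the window are split, lighter ones play Russian roulette, and the expected total weight is preserved so tallies stay unbiased.

```python
import numpy as np

rng = np.random.default_rng(11)

def weight_window(w, w_low=0.5, w_high=2.0, w_survive=1.0):
    """Simplified weight-window game: split heavy particles, roulette light
    ones; the expected total weight is preserved in both cases (unbiased)."""
    if w > w_high:
        n = int(np.ceil(w / w_high))         # split into n lighter copies
        return [w / n] * n
    if w < w_low:
        if rng.random() < w / w_survive:     # survive with probability w/w_survive
            return [w_survive]
        return []                            # particle killed
    return [w]                               # inside the window: unchanged
```

Flux-based windows choose w_low/w_high per region from an estimated flux map, which is how the coupled system evens out the relative errors across the core.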

  7. Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians

    SciTech Connect

    Mason, D R; Rudd, R E; Sutton, A P

    2003-10-13

    We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al.[1] is rederived from the n-fold way algorithm of Bortz et al.[2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
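
The first-order n-fold way (BKL) baseline that the paper builds on can be sketched in a few lines: every step selects an event with probability proportional to its rate and advances time by an exponentially distributed residence time, so no move is ever rejected. The rate values below are arbitrary toy numbers.

```python
import numpy as np

rng = np.random.default_rng(7)

def nfold_step(rates, t):
    """One rejection-free n-fold-way step: pick an event with probability
    proportional to its rate, then advance time by an exponential residence
    time with mean 1/total_rate."""
    total = rates.sum()
    event = int(np.searchsorted(np.cumsum(rates), rng.random() * total,
                                side="right"))
    return event, t - np.log(rng.random()) / total

rates = np.array([0.1, 2.0, 0.5])   # toy transition rates out of the current state
counts = np.zeros(3, dtype=int)
t = 0.0
for _ in range(10000):
    event, t = nfold_step(rates, t)
    counts[event] += 1
# event frequencies track the rates; mean time step is 1/rates.sum()
```

The second-order extension in the paper effectively lumps a flickering pair of states together so that this selection need not be repeated for unproductive back-and-forth moves.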

  8. Monte Carlo simulation of classical spin models with chaotic billiards

    NASA Astrophysics Data System (ADS)

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
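
For reference, the random-number method that the billiard dynamics is benchmarked against is standard Metropolis sampling of the Ising model. This sketch (nearest-neighbour coupling J = 1, toy 8x8 lattice) is the conventional baseline, not the billiard algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_sweep(spins, beta):
    """One sweep of standard random-number Metropolis updates on a 2D Ising
    lattice with periodic boundaries (the stochastic baseline the chaotic
    billiard dynamics is compared against)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

spins = np.ones((8, 8), dtype=int)
for _ in range(200):
    metropolis_sweep(spins, beta=1.0)        # well below Tc: stays magnetized
```

The billiard approach replaces the two rng calls per update with deterministic chaotic dynamics while targeting the same Boltzmann distribution.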

  9. Quantum Monte Carlo Simulations of Correlated-Electron Models

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwei

    1996-05-01

    We briefly review quantum Monte Carlo simulation methods for strongly correlated fermion systems and the well-known ``sign'' problem that plagues these methods. We then discuss recent efforts to overcome the problem in the context of simulations of lattice models of electron correlations. In particular, we describe a new algorithm^1, called the constrained path Monte Carlo (CPMC), for studying ground-state (T=0K) properties. It has the form of a random walk in a space of mean-field solutions (Slater determinants); the exponential decay of ``sign'' or signal-to-noise ratio is eliminated by constraining the paths of the random walk according to a known trial wave function. Applications of this algorithm to the Hubbard model have enabled accurate and systematic studies of correlation functions, including s- and d-wave pairings, and hence the long-standing problem of the model's relevance to superconductivity. The method is directly applicable to a variety of other models important to understand high-Tc superconductors and heavy-fermion compounds. In addition, it is expected to be useful to simulations of nuclei, atoms, molecules, and solids. We also comment on possible extensions of the algorithm to finite-temperature calculations. Work supported in part by the Department of Energy's High Performance Computing and Communication Program at Los Alamos National Laboratory, and at OSU by DOE-Basic Energy Sciences, Division of Materials Sciences. ^1 Shiwei Zhang, J. Carlson, and J. E. Gubernatis, Phys. Rev. Lett. 74, 3652 (1995).

  10. James Webb Space Telescope (JWST) Stationkeeping Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald J.; Alberding, Cassandra; Yu, Wayne

    2014-01-01

    The James Webb Space Telescope (JWST) will launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 11 years. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.

  11. Monte Carlo field-theoretic simulations of a homopolymer blend

    NASA Astrophysics Data System (ADS)

    Spencer, Russell; Matsen, Mark

    Fluctuation corrections to the macrophase segregation transition (MST) in a symmetric homopolymer blend are examined using Monte Carlo field-theoretic simulations (MC-FTS). This technique involves treating interactions between unlike monomers using standard Monte Carlo techniques, while enforcing incompressibility as is done in mean-field theory. When using MC-FTS, we need to account for a UV divergence. This is done by renormalizing the Flory-Huggins interaction parameter to incorporate the divergent part of the Hamiltonian. We compare different ways of calculating this effective interaction parameter. Near the MST, the length scale of compositional fluctuations becomes large; however, the high computational requirements of MC-FTS restrict us to small system sizes. We account for these finite-size effects using the method of Binder cumulants, allowing us to locate the MST with high precision. We examine fluctuation corrections to the mean-field MST, χN = 2, as they vary with the invariant degree of polymerization, N̄ = ρ²a⁶N. These results are compared with particle-based simulations as well as analytical calculations using the renormalized one-loop theory. This research was funded by the Center for Sustainable Polymers.
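
The Binder-cumulant method used above to pin down the MST reduces to a one-line statistic of the sampled order parameter m: U4 = 1 - ⟨m⁴⟩/(3⟨m²⟩²). The sketch below checks its two limiting values on synthetic samples (Gaussian fluctuations for the disordered phase, two delta peaks for phase coexistence); these stand-ins are illustrative, not MC-FTS output.

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2); curves of
    U4 for different system sizes cross at the transition, which is what makes
    it a precise, finite-size-insensitive locator of the MST."""
    m = np.asarray(m)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

rng = np.random.default_rng(0)
disordered = rng.standard_normal(100_000)        # Gaussian fluctuations: U4 -> 0
ordered = rng.choice([-1.0, 1.0], size=100_000)  # two-phase coexistence: U4 -> 2/3
```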

  12. Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.

    2014-01-01

    The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.

  13. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.

  14. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    SciTech Connect

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  15. Worm algorithm and diagrammatic Monte Carlo: A new approach to continuous-space path integral Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Boninsegni, M.; Prokof'Ev, N. V.; Svistunov, B. V.

    2006-09-01

    A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature. The algorithm is formulated within the general path integral Monte Carlo (PIMC) scheme, but also allows one to perform quantum simulations in the grand canonical ensemble, as well as to compute off-diagonal imaginary-time correlation functions, such as the Matsubara Green function, simultaneously with diagonal observables. Another important innovation consists of the expansion of the attractive part of the pairwise potential energy into elementary (diagrammatic) contributions, which are then statistically sampled. This affords a complete microscopic account of the long-range part of the potential energy, while keeping the computational complexity of all updates independent of the size of the simulated system. The computational scheme allows for efficient calculations of the superfluid fraction and off-diagonal correlations in space-time, for system sizes which are orders of magnitude larger than those accessible to conventional PIMC. We present illustrative results for the superfluid transition in bulk liquid ⁴He in two and three dimensions, as well as the calculation of the chemical potential of hcp ⁴He.

  16. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    SciTech Connect

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    1989-01-01

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  17. K-effective of the world: and other concerns for Monte Carlo Eigenvalue calculations

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    Monte Carlo methods have been used to compute k-eff and the fundamental mode eigenfunction of critical systems since the 1950s. Despite the sophistication of today's Monte Carlo codes for representing realistic geometry and physics interactions, correct results can be obtained in criticality problems only if users pay attention to source convergence in the Monte Carlo iterations and to running a sufficient number of neutron histories to adequately sample all significant regions of the problem. Recommended best practices for criticality calculations are reviewed and applied to several practical problems for nuclear reactors and criticality safety, including the 'K-effective of the World' problem. Numerical results illustrate the concerns about convergence and bias. The general conclusion is that with today's high-performance computers, improved understanding of the theory, new tools for diagnosing convergence (e.g., the Shannon entropy of the fission distribution), and clear practical guidance for performing calculations, practitioners will have a greater degree of confidence than ever of obtaining correct results for Monte Carlo criticality calculations.
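The Shannon-entropy diagnostic mentioned above is simple to compute from binned fission-source counts; a small illustrative sketch (not code from any production Monte Carlo package):

```python
import numpy as np

def shannon_entropy(source_counts):
    """Shannon entropy H = -sum p_i log2 p_i of a binned fission-source
    distribution. When H stabilizes over successive power iterations,
    the source is likely converged (a necessary, though not sufficient,
    indication)."""
    counts = np.asarray(source_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]  # by convention 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

# Uniform source over 8 spatial bins: maximum entropy, log2(8) = 3 bits.
h_uniform = shannon_entropy([100] * 8)
# Source collapsed into a single bin: zero entropy.
h_collapsed = shannon_entropy([800, 0, 0, 0, 0, 0, 0, 0])
```

Plotting this quantity iteration by iteration, and discarding cycles before it plateaus, is the practical use of the diagnostic.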

  18. Monte Carlo studies of APEX

    SciTech Connect

    Ahmad, I.; Back, B.B.; Betts, R.R.

    1995-08-01

    An essential component in the assessment of the significance of the results from APEX is a demonstrated understanding of the acceptance and response of the apparatus. This requires detailed simulations which can be compared to the results of various source and in-beam measurements. These simulations were carried out using the computer codes EGS and GEANT, both specifically designed for this purpose. As far as is possible, all details of the geometry of APEX were included. We compared the results of these simulations with measurements using electron conversion sources, positron sources and pair sources. The overall agreement is quite acceptable and some of the details are still being worked on. The simulation codes were also used to compare the results of measurements of in-beam positron and conversion electrons with expectations based on known physics or other methods. Again, satisfactory agreement is achieved. We are currently working on the simulation of various pair-producing scenarios such as the decay of a neutral object in the mass range 1.5-2.0 MeV and also the emission of internal pairs from nuclear transitions in the colliding ions. These results are essential input to the final results from APEX on cross section limits for various, previously proposed, sharp-line producing scenarios.

  19. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
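The construction can be sketched as a Metropolis sampler on a finite state space driven by an externally supplied stream of uniforms; Owen and Tribble replace the i.i.d. stream with a completely uniformly distributed sequence, while the toy below simply feeds in pseudorandom numbers to show the interface (the three-state target and uniform proposal are illustrative choices, not from the paper):

```python
import numpy as np

def metropolis_finite(target_weights, uniforms, start=0):
    """Metropolis sampler on a finite state space, driven by an external
    stream of uniforms. Each step consumes two: one to propose a state
    uniformly at random, one for the accept/reject decision."""
    w = np.asarray(target_weights, dtype=float)
    n_states = len(w)
    state, visits = start, np.zeros(n_states)
    it = iter(uniforms)
    try:
        while True:
            proposal = int(next(it) * n_states)          # uniform proposal
            if next(it) < min(1.0, w[proposal] / w[state]):
                state = proposal                          # accept
            visits[state] += 1
    except StopIteration:
        pass  # stream exhausted
    return visits / visits.sum()

rng = np.random.default_rng(1)
# Target distribution proportional to (1, 2, 3), i.e. (1/6, 1/3, 1/2).
freq = metropolis_finite([1.0, 2.0, 3.0], rng.random(200_000))
```

Swapping the `uniforms` argument for a completely uniformly distributed sequence is the paper's modification; the sampler itself is unchanged.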

  20. Juan Carlos D'Olivo: A portrait

    NASA Astrophysics Data System (ADS)

    Aguilar-Arévalo, Alexis A.

    2013-06-01

    This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.

  1. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
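The kind of comparison described, a Monte Carlo failure-probability estimate checked against a closed-form result, can be sketched for a generic limit-state function g = R − S with independent normal capacity R and load S (the numbers below are illustrative, not the article's boom-structure values):

```python
import math
import numpy as np

# Failure occurs when the limit-state function g = R - S is negative.
# For independent normal R and S the exact failure probability is
# Phi(-beta) with reliability index beta = (muR - muS)/sqrt(sR^2 + sS^2),
# which gives a theoretical value to compare against the simulation.
muR, sR = 60.0, 5.0   # capacity mean and std (illustrative units, e.g. kN)
muS, sS = 40.0, 8.0   # load mean and std
beta = (muR - muS) / math.hypot(sR, sS)
p_exact = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Phi(-beta)

rng = np.random.default_rng(2)
n = 1_000_000
g = rng.normal(muR, sR, n) - rng.normal(muS, sS, n)
p_mc = np.mean(g < 0.0)  # Monte Carlo estimate of the failure probability
```

With a million samples the Monte Carlo estimate agrees with the closed form to a few parts in ten thousand, which is the favorable comparison the abstract refers to.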

  2. MCMAC: Monte Carlo Merger Analysis Code

    NASA Astrophysics Data System (ADS)

    Dawson, William A.

    2014-07-01

    Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses them in an analytic model to obtain posterior PDFs for merger dynamical properties of interest (e.g. collision velocity, time since collision).
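The workflow, drawing from observational priors and pushing each draw through an analytic model to build posterior PDFs, can be sketched as follows; the Gaussian priors and the free-fall collision-speed formula here are simplified stand-ins, not MCMAC's actual priors or dynamical model:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

rng = np.random.default_rng(3)
n = 100_000
# Hypothetical observed priors on the two subcluster masses (Msun) and
# their 3D separation (kpc).
m1 = rng.normal(1.0e15, 1.5e14, n)
m2 = rng.normal(5.0e14, 1.0e14, n)
r = rng.normal(1000.0, 100.0, n)
ok = (m1 > 0) & (m2 > 0) & (r > 0)  # discard unphysical draws

# Stand-in analytic model: free-fall speed at separation r, starting
# from rest at infinity. Pushing every prior draw through the model
# turns the priors into a posterior PDF for the derived quantity.
v_coll = np.sqrt(2.0 * G * (m1[ok] + m2[ok]) / r[ok])
median, lo, hi = np.percentile(v_coll, [50, 16, 84])
```

The 16th/84th percentiles of `v_coll` then summarize the posterior in the usual central-interval form.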

  3. A comparison of Monte Carlo generators

    NASA Astrophysics Data System (ADS)

    Golan, Tomasz

    2015-05-01

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and π+ two-dimensional energy vs cosine distribution.

  4. Monte Carlo simulations of lattice gauge theories

    SciTech Connect

    Rebbi, C

    1980-02-01

    Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their center, Z_2, is factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
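A thermal-cycle experiment of the kind described can be illustrated in miniature with the same Metropolis machinery on a small 2D Ising model (a cheap stand-in for the 4D gauge systems, which are far more expensive): the temperature is varied slightly every few iterations and the internal energy per site is recorded.

```python
import numpy as np

rng = np.random.default_rng(4)
L = 16
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, beta):
    """One Metropolis sweep (L*L single-spin flip attempts)."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, 2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def energy_per_site(spins):
    # Each bond counted once: couple every spin to its up and left neighbors.
    return -np.mean(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1)))

# Thermal cycle: slowly cool, then slowly reheat, measuring the internal
# energy every few Monte Carlo iterations as in the runs described above.
energies = []
betas = np.concatenate([np.linspace(0.1, 0.6, 26),   # cooling leg
                        np.linspace(0.6, 0.1, 26)])  # heating leg
for beta in betas:
    for _ in range(5):
        sweep(spins, beta)
    energies.append(energy_per_site(spins))
```

A plot of `energies` against `betas` shows the ordering transition and any hysteresis between the two legs, which is the signal such cycles are designed to expose.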

  5. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and π+ two-dimensional energy vs cosine distribution.

  6. A general method for spatially coarse-graining Metropolis Monte Carlo simulations onto a lattice

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Seider, Warren D.; Sinno, Talid

    2013-03-01

    A recently introduced method for coarse-graining standard continuous Metropolis Monte Carlo simulations of atomic or molecular fluids onto a rigid lattice of variable scale [X. Liu, W. D. Seider, and T. Sinno, Phys. Rev. E 86, 026708 (2012)], 10.1103/PhysRevE.86.026708 is further analyzed and extended. The coarse-grained Metropolis Monte Carlo technique is demonstrated to be highly consistent with the underlying full-resolution problem using a series of detailed comparisons, including vapor-liquid equilibrium phase envelopes and spatial density distributions for the Lennard-Jones argon and simple point charge water models. In addition, the principal computational bottleneck associated with computing a coarse-grained interaction function for evolving particle positions on the discretized domain is addressed by the introduction of new closure approximations. In particular, it is shown that the coarse-grained potential, which is generally a function of temperature and coarse-graining level, can be computed at multiple temperatures and scales using a single set of free energy calculations. The computational performance of the method relative to standard Monte Carlo simulation is also discussed.

  7. Noise-Parameter Uncertainties: A Monte Carlo Simulation

    PubMed Central

    Randa, J.

    2002-01-01

    This paper reports the formulation and results of a Monte Carlo study of uncertainties in noise-parameter measurements. The simulator permits the computation of the dependence of the uncertainty in the noise parameters on uncertainties in the underlying quantities. Results are obtained for the effect due to uncertainties in the reflection coefficients of the input terminations, the noise temperature of the hot noise source, connector variability, the ambient temperature, and the measurement of the output noise. Representative results are presented for both uncorrelated and correlated uncertainties in the underlying quantities. The simulation program is also used to evaluate two possible enhancements of noise-parameter measurements: the use of a cold noise source as one of the input terminations and the inclusion of a measurement of the “reverse configuration,” in which the noise from the amplifier input is measured directly.
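The heart of such a simulator, drawing correlated errors in the underlying quantities and pushing them through the measurement equations, can be sketched generically; the two-input difference below is a hypothetical placeholder for the actual noise-parameter reduction:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
mean = np.array([1.0, 2.0])     # nominal values of two underlying quantities
sigma = np.array([0.05, 0.05])  # their standard uncertainties

def output_std(correlation):
    """Std of the output when the two inputs have the given correlation."""
    cov = np.diag(sigma**2)
    cov[0, 1] = cov[1, 0] = correlation * sigma[0] * sigma[1]
    draws = rng.multivariate_normal(mean, cov, size=n)
    y = draws[:, 0] - draws[:, 1]  # placeholder measurement equation
    return y.std()

u_uncorr = output_std(0.0)  # ~ sqrt(0.05^2 + 0.05^2) ~ 0.071
u_corr = output_std(0.9)    # positive correlation largely cancels in a difference
```

The comparison shows why the paper reports correlated and uncorrelated cases separately: for this difference-type output, strong positive correlation between the inputs shrinks the output uncertainty by roughly a factor of three.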

  8. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
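The idea behind fast RSA generation can be sketched with a cell-grid neighbor search for mono-sized spheres: when each cell spans one diameter, any overlapping sphere must lie in the same or an adjacent cell, so each trial placement checks at most 27 cells. (The paper's actual data structures and its poly-sized handling are more elaborate.)

```python
import numpy as np

rng = np.random.default_rng(6)
box, radius, target = 10.0, 0.5, 200
cell = 2.0 * radius                 # one cell per sphere diameter
n_cells = int(box / cell)
grid = {}                           # (ix, iy, iz) -> list of accepted centers

def neighbors(idx):
    """The 3x3x3 block of cells around idx (index arithmetic wraps, which
    only adds harmless extra checks at the box boundary)."""
    for d in np.ndindex(3, 3, 3):
        yield tuple((i + di - 1) % n_cells for i, di in zip(idx, d))

centers = []
while len(centers) < target:
    c = rng.uniform(radius, box - radius, 3)   # keep spheres fully inside
    idx = tuple(int(x / cell) % n_cells for x in c)
    # Accept only if no sphere in any neighboring cell overlaps.
    if all(np.linalg.norm(c - other) >= 2.0 * radius
           for key in neighbors(idx) for other in grid.get(key, [])):
        centers.append(c)
        grid.setdefault(idx, []).append(c)
```

Because the overlap test is confined to neighboring cells, the cost per accepted sphere stays roughly constant instead of growing with the number already placed, which is the speedup the modified RSA method exploits.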

  9. RMC - A Monte Carlo code for reactor physics analysis

    SciTech Connect

    Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G.

    2013-07-01

    A new Monte Carlo neutron transport code, RMC, is being developed by the Department of Engineering Physics, Tsinghua University, Beijing, as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now provides criticality calculation, fixed-source calculation, burnup calculation, and kinetics simulation. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies, parallel calculation, and temperature-dependent cross-section processing have been researched and implemented in RMC to improve its efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance, and parallel performance shown in this paper demonstrate the capability of RMC to handle reactor analysis problems with good performance. (authors)

  10. Monte Carlo PENRADIO software for dose calculation in medical imaging

    NASA Astrophysics Data System (ADS)

    Adrien, Camille; Lòpez Noriega, Mercedes; Bonniaud, Guillaume; Bordy, Jean-Marc; Le Loirec, Cindy; Poumarede, Bénédicte

    2014-06-01

    The increase in the collective radiation dose due to the large number of medical imaging exams has led the medical physics community to consider carefully the amount of dose delivered in these exams and its associated risks. For this purpose we have developed a Monte Carlo tool, PENRADIO, based on a modified version of the PENELOPE code (2006 release), to obtain accurate individualized radiation doses in conventional and interventional radiography and in computed tomography (CT). This tool has been validated, showing excellent agreement between measured and simulated organ doses in the case of a hip conventional radiography and a coronography. We expect the same accuracy in further results for other localizations and CT examinations.

  11. Quantum Monte Carlo Calculations in Solids with Downfolded Hamiltonians

    NASA Astrophysics Data System (ADS)

    Ma, Fengjie; Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry

    2015-06-01

    We present a combination of a downfolding many-body approach with auxiliary-field quantum Monte Carlo (AFQMC) calculations for extended systems. Many-body calculations operate on a simpler Hamiltonian which retains material-specific properties. The Hamiltonian is systematically improvable and allows one to dial, in principle, between the simplest model and the original Hamiltonian. As a by-product, pseudopotential errors are essentially eliminated using frozen orbitals constructed adaptively from the solid environment. The computational cost of the many-body calculation is dramatically reduced without sacrificing accuracy. Excellent accuracy is achieved for a range of solids, including semiconductors, ionic insulators, and metals. We apply the method to calculate the equation of state of cubic BN under ultrahigh pressure, and determine the spin gap in NiO, a challenging prototypical material with strong electron correlation effects.

  12. A study of Monte Carlo radiative transfer through fractal clouds

    SciTech Connect

    Gautier, C.; Lavallec, D.; O'Hirok, W.; Ricchiazzi, P.

    1996-04-01

    An understanding of radiation transport (RT) through clouds is fundamental to studies of the earth's radiation budget and climate dynamics. The transmission through horizontally homogeneous clouds has been studied thoroughly using accurate discrete-ordinates radiative transfer models. However, the applicability of these results to general problems of the global radiation budget is limited by the plane-parallel assumption and by the fact that real cloud fields show variability, both vertically and horizontally, on all size scales. To understand how radiation interacts with realistic clouds, we have used a Monte Carlo radiative transfer model to compute the details of the photon-cloud interaction on synthetic cloud fields. The synthetic cloud fields, generated by a cascade model, reproduce the scaling behavior as well as the cloud variability observed and estimated from cloud satellite data.

  13. Direct Simulation Monte Carlo (DSMC) on the Connection Machine

    SciTech Connect

    Wong, B.C.; Long, L.N.

    1992-01-01

    The massively parallel computer Connection Machine is utilized to map an improved version of the direct simulation Monte Carlo (DSMC) method for solving flows with the Boltzmann equation. The kinetic theory is required for analyzing hypersonic aerospace applications, and the features and capabilities of the DSMC particle-simulation technique are discussed. The DSMC is shown to be inherently massively parallel and data parallel, and the algorithm is based on molecule movements, cross-referencing their locations, locating collisions within cells, and sampling macroscopic quantities in each cell. The serial DSMC code is compared to the present parallel DSMC code, and timing results show that the speedup of the parallel version is approximately linear. The correct physics can be resolved from the results of the complete DSMC method implemented on the connection machine using the data-parallel approach. 41 refs.

  14. Monte Carlo simulations of random non-commutative geometries

    NASA Astrophysics Data System (ADS)

    Barrett, John W.; Glaser, Lisa

    2016-06-01

    Random non-commutative geometries are introduced by integrating over the space of Dirac operators that form a spectral triple with a fixed algebra and Hilbert space. The cases with the simplest types of Clifford algebra are investigated using Monte Carlo simulations to compute the integrals. Various qualitatively different types of behaviour of these random Dirac operators are exhibited. Some features are explained in terms of the theory of random matrices but other phenomena remain mysterious. Some of the models with a quartic action of symmetry-breaking type display a phase transition. Close to the phase transition the spectrum of a typical Dirac operator shows manifold-like behaviour for the eigenvalues below a cut-off scale.

  15. Quantum Monte Carlo study of bilayer ionic Hubbard model

    NASA Astrophysics Data System (ADS)

    Jiang, Mi

    The interaction-driven insulator-to-metal transition has been reported in the ionic Hubbard model (IHM) for intermediate interaction U, and is of fundamental interest for correlated electronic systems. Here we use determinant quantum Monte Carlo to study the interplay of interlayer hybridization V and two types of intralayer staggered potentials: one with the same phase (in-phase) and the other with a π-phase shift (anti-phase) between the two layers, termed the ``bilayer ionic Hubbard model''. We demonstrate that the interaction-driven insulator-metal transition extends to the bilayer IHM with finite V for both types of staggered potentials. Moreover, the system with the in-phase potential is driven toward the metallic phase as the interlayer hybridization is turned on, while the system with the anti-phase potential tends toward an insulator with stronger charge-density order. The author thanks CSCS, Lugano, Switzerland for computing facilities.

  16. Shield weight optimization using Monte Carlo transport calculations

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.; Wohl, M. L.

    1972-01-01

    Outlines are given of the theory used in the FASTER-3 Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries. The code has the additional capability of calculating the minimum-weight layered unit-shield configuration which will meet a specified dose rate constraint. It includes the treatment of geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources which have a specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. Results are presented for sample problems involving primary neutron and both primary and secondary photon transport in a spherical reactor shield configuration. These results include the optimization of the shield configuration.

  17. Continuous-time quantum Monte Carlo using worm sampling

    NASA Astrophysics Data System (ADS)

    Gunacker, P.; Wallerberger, M.; Gull, E.; Hausoel, A.; Sangiovanni, G.; Held, K.

    2015-10-01

    We present a worm sampling method for calculating one- and two-particle Green's functions using continuous-time quantum Monte Carlo simulations in the hybridization expansion (CT-HYB). Instead of measuring Green's functions by removing hybridization lines from partition function configurations, as in conventional CT-HYB, the worm algorithm directly samples the Green's function. We show that worm sampling is necessary to obtain general two-particle Green's functions which are not of density-density type and that it improves the sampling efficiency when approaching the atomic limit. Such two-particle Green's functions are needed to compute off-diagonal elements of susceptibilities and occur in diagrammatic extensions of the dynamical mean-field theory and in efficient estimators for the single-particle self-energy.

  18. Accelerating particle-in-cell simulations using multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ricketson, Lee

    2015-11-01

    Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
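MLMC's core trick, coupling coarse and fine discretizations through shared random inputs so that the level corrections have small variance, is easiest to see in its textbook SDE setting; below is a sketch for geometric Brownian motion with Euler-Maruyama steps (illustrative parameters, not from the PIC application):

```python
import numpy as np

rng = np.random.default_rng(7)
mu_, sigma_, T, x0 = 0.05, 0.2, 1.0, 1.0  # GBM parameters

def level_estimator(level, n_paths, m0=2):
    """Mean of P_f - P_c, where P is X_T from Euler-Maruyama with
    m0**level steps (fine) and m0**(level-1) steps (coarse), both driven
    by the SAME Brownian increments -- the coupling that gives the level
    correction its small variance."""
    nf = m0**level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
    xf = np.full(n_paths, x0)
    for k in range(nf):                          # fine path
        xf = xf + mu_ * xf * dt + sigma_ * xf * dW[:, k]
    if level == 0:
        return xf.mean()
    xc = np.full(n_paths, x0)
    dWc = dW.reshape(n_paths, nf // m0, m0).sum(axis=2)  # coarsened increments
    for k in range(nf // m0):                    # coarse path, same noise
        xc = xc + mu_ * xc * (m0 * dt) + sigma_ * xc * dWc[:, k]
    return (xf - xc).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fewer
# paths on the expensive fine levels.
estimate = sum(level_estimator(l, 50_000 // (2**l) + 1000) for l in range(5))
# Exact answer for geometric Brownian motion: E[X_T] = x0 * exp(mu * T).
```

Generalizing this to PIC, where the fields evolve self-consistently with the particles, is precisely the extension the abstract describes.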

  19. Monte Carlo Simulation Tool Installation and Operation Guide

    SciTech Connect

    Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.

    2013-09-02

    This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software, and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation, and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

  20. Monte Carlo simulation of a cobalt-60 beam

    SciTech Connect

    Han, K.; Ballon, D.; Chui, C.; Mohan, R.

    1987-05-01

    We have used the Stanford Electron Gamma Shower (EGS) Monte Carlo code to compute photon spectra from an AECL Theratron 780 cobalt-60 unit. Particular attention has been paid to careful modeling of the geometry and material construction of the cobalt-60 source capsule, source housing, and collimator assembly. From our simulation, we conclude that the observed increase in output of the machine with increasing field size is caused by scattered photons from the primary definer and the adjustable collimator. We have also used the generated photon spectra as input to a pencil beam model to calculate tissue-air ratios in water, and compared the results to a model which uses a monochromatic photon energy of 1.25 MeV.

  1. Monte Carlo modeling of an integrating sphere reflectometer.

    PubMed

    Prokhorov, Alexander V; Mekhontsev, Sergey N; Hanssen, Leonard M

    2003-07-01

    The Monte Carlo method has been applied to numerical modeling of an integrating sphere designed for hemispherical-directional reflectance factor measurements. It is shown that a conventional algorithm of backward ray tracing used for estimation of characteristics of the radiation field at a given point has slow convergence for small source-to-sphere-diameter ratios. A newly developed algorithm that substantially improves the convergence by calculation of direct source-induced irradiation for every point of diffuse reflection of rays traced is described. The method developed is applied to an integrating sphere reflectometer for the visible and infrared spectral ranges. Parametric studies of hemispherical radiance distributions for radiation incident onto the sample center were performed. The deviations of measured sample reflectance from the actual reflectance as a result of various factors were computed. The accuracy of the results, adequacy of the reflectance model, and other important aspects of the algorithm implementation are discussed. PMID:12868822

  2. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  3. 75 FR 53332 - San Carlos Irrigation Project, Arizona

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... Bureau of Reclamation San Carlos Irrigation Project, Arizona AGENCY: Bureau of Reclamation, Interior... of San Carlos Irrigation Project (SCIP) water delivery facilities near the communities of Casa Grande... and Central Arizona Project (CAP) to agricultural lands in the San Carlos Irrigation and...

  4. Minimizing the cost of splitting in Monte Carlo radiation transport simulation

    SciTech Connect

    Juzaitis, R.J.

    1980-10-01

A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
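As a hedged illustration of the Monte Carlo technique being analyzed above (not the paper's deterministic moment equations), the sketch below applies geometric splitting at an internal surface of a toy 1D slab, with Russian roulette for particles that cross back. The slab geometry, pure-absorption/isotropic-scattering physics, and all parameter values are invented for illustration:

```python
import random

def transmission(slab=8.0, mfp=1.0, scatter_prob=0.3, x_split=4.0,
                 n_split=4, n_hist=40000, seed=1):
    """Estimate transmission through a 1D slab (scatter to a random +/-1
    direction with probability scatter_prob, absorb otherwise), using
    geometric splitting at x_split for forward crossings and Russian
    roulette for backward crossings."""
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_hist):
        stack = [(0.0, +1, 1.0)]          # (position, direction, weight)
        while stack:
            x, mu, w = stack.pop()
            # free flight to the next collision site (exponential path length)
            x_new = x + mu * rng.expovariate(1.0 / mfp)
            if mu > 0 and x < x_split <= x_new:
                # forward crossing of the splitting surface: n copies, weight/n
                for _ in range(n_split):
                    stack.append((x_split, mu, w / n_split))
                continue
            if mu < 0 and x_new <= x_split < x:
                # backward crossing: Russian roulette, survivors carry weight*n
                if rng.random() < 1.0 / n_split:
                    stack.append((x_split, mu, w * n_split))
                continue
            if x_new >= slab:
                tally += w                # transmitted through the back face
                continue
            if x_new <= 0.0:
                continue                  # leaked out the front face
            if rng.random() < scatter_prob:
                stack.append((x_new, rng.choice((-1, +1)), w))
            # else: absorbed at the collision site
    return tally / n_hist
```

Restarting a split or rouletted particle at the surface with a freshly sampled path length is unbiased because the exponential free-flight distribution is memoryless; in the pure-absorber limit the estimator reduces to exp(-slab/mfp).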

  5. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
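The numerically stabilized matrix multiplication mentioned above can be sketched generically: the product of many matrices is kept in a Q·diag(d)·T factorization, re-orthogonalizing after each multiply so that widely separated scales live in d instead of being lost to round-off. This is a textbook-style stabilization for illustration, not the authors' improved canonical-ensemble algorithm:

```python
import numpy as np

def stabilized_product(mats):
    """Accumulate the left-product B_n ... B_2 B_1 in stabilized form
    Q @ diag(d) @ T (Q orthogonal, d positive scales, T well-conditioned)."""
    n = mats[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for B in mats:
        M = (B @ Q) * d                 # form B Q D, scaling columns by d
        Q, R = np.linalg.qr(M)          # re-orthogonalize
        d = np.abs(np.diag(R))          # pull the scales out of R
        d[d == 0] = 1.0
        T = (R / d[:, None]) @ T        # unit-scaled factor times accumulated T
    return Q, d, T
```

The invariant is that Q @ diag(d) @ T always equals the product accumulated so far, but the exponentially large/small scales are isolated in d, which is what makes low-temperature propagator products tractable in finite precision.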

  6. Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System

    SciTech Connect

    Karim, Julia Abdul

    2008-05-20

The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to simulate problems accurately. Its calculations are based on the continuous-energy method. The code combines an accurate physics model, flexible geometry description, and variance reduction techniques, and can achieve computational speeds several times higher than the conventional scalar method on a vector supercomputer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities, and other quantities were obtained.

  7. Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

    PubMed Central

    Chen, Jin; Fang, Qianqian

    2012-01-01

We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008

  8. A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation

    SciTech Connect

Wu, Y.; Modest, M.F.; Haworth, D.C. (E-mail: dch12@psu.edu)

    2007-05-01

    A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.

  9. Monte Carlo calculations of the HPGe detector efficiency for radioactivity measurement of large volume environmental samples.

    PubMed

    Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim

    2015-08-01

A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a (152)Eu source, packed in a Marinelli beaker, was developed for routine analysis of large volume environmental samples. The model parameters, in particular the dead layer thickness, were then adjusted using a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing a (152)Eu source in both grass and resin matrices packed in a Marinelli beaker. From this comparison, good agreement between experiment and Monte Carlo calculation was obtained, thereby confirming the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the (137)Cs distribution in a soil matrix. From this application, instructive results were achieved, highlighting in particular the erosion and accumulation zones of the studied site. PMID:25982445

  10. Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems

    NASA Astrophysics Data System (ADS)

    Vasiliev, Eugene

    2014-11-01

    Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.

  11. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  12. Topics in computational physics

    NASA Astrophysics Data System (ADS)

    Monville, Maura Edelweiss

Computational Physics spans a broad range of applied fields extending beyond the borders of traditional physics tracks. Demonstrated flexibility and the ability to switch to a new project and quickly pick up the basics of the new field are among the essential requirements for a computational physicist. In line with the above-mentioned prerequisites, this thesis describes the development and results of two computational projects belonging to two different applied science areas. The first project is a Materials Science application. It is a prescription for an innovative nano-fabrication technique that is built out of two other known techniques. The preliminary results of the simulation of this novel nano-patterning fabrication method show an average improvement, roughly equal to 18%, with respect to the single techniques it draws on. The second project is a Homeland Security application aimed at preventing smuggling of nuclear material at ports of entry. It is concerned with a simulation of an active material interrogation system based on the analysis of induced photo-nuclear reactions. This project consists of a preliminary evaluation of the photo-fission implementation in the more robust radiation transport Monte Carlo codes, followed by the customization and extension of MCNPX, a Monte Carlo code developed at Los Alamos National Laboratory, and MCNP-PoliMi. The final stage of the project consists of testing the interrogation system against some real-world scenarios, for the purpose of determining the system's reliability, material discrimination power, and limitations.

  13. Vectorization of computer programs with applications to computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Gentzsch, W.

Techniques for adapting serial computer programs to the architecture of modern vector computers are presented and illustrated with examples, mainly from the field of computational fluid dynamics. The limitations of conventional computers are reviewed; the vector computers CRAY-1S and CDC-CYBER 205 are characterized; and chapters are devoted to vectorization of FORTRAN programs, sample-program vectorization on five different vector and parallel-architecture computers, restructuring of basic linear-algebra algorithms, iterative methods, vectorization of simple numerical algorithms, and fluid-dynamics vectorization on the CRAY-1 (including an implicit Beam-Warming scheme, an implicit finite-difference method for laminar boundary-layer equations, the Galerkin method, and a direct Monte Carlo simulation). Diagrams, charts, tables, and photographs are provided.

  14. IGMtransmission: Transmission curve computation

    NASA Astrophysics Data System (ADS)

    Harrison, Christopher M.; Meiksin, Avery; Stock, David

    2015-04-01

    IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

  15. Computational Materials Research

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A. (Editor); Gates, Thomas S. (Editor)

    1996-01-01

    Computational Materials aims to model and predict thermodynamic, mechanical, and transport properties of polymer matrix composites. This workshop, the second coordinated by NASA Langley, reports progress in measurements and modeling at a number of length scales: atomic, molecular, nano, and continuum. Assembled here are presentations on quantum calculations for force field development, molecular mechanics of interfaces, molecular weight effects on mechanical properties, molecular dynamics applied to poling of polymers for electrets, Monte Carlo simulation of aromatic thermoplastics, thermal pressure coefficients of liquids, ultrasonic elastic constants, group additivity predictions, bulk constitutive models, and viscoplasticity characterization.

  16. Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling

    SciTech Connect

    Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.

    2008-01-14

    Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1

  17. ISAJET: a Monte Carlo event generator for pp and anti pp interactions. Version 3

    SciTech Connect

    Paige, F.E.; Protopopescu, S.D.

    1982-09-01

ISAJET is a Monte Carlo computer program which simulates pp and anti-pp reactions at high energy. It can generate minimum bias events representative of the total inelastic cross section, high-pT hadronic events, and Drell-Yan events with a virtual γ, W±, or Z0. It is based on perturbative QCD and phenomenological models for jet fragmentation.

  18. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    SciTech Connect

    Van Siclen, Clinton D

    2007-02-01

    A computationally simple way to accommodate "basins" of trapping states in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
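As a hedged illustration of the quantities the abstract above refers to, the sketch below computes a basin's mean residence time and exit-state probabilities exactly, via standard absorbing-Markov-chain algebra for a small set of trapping states. This is the generic linear-algebra route, not Van Siclen's equilibrated-basin approximation, and all rate values are invented:

```python
import numpy as np

def basin_escape(rates_in, rates_out, start):
    """rates_in[i][j]: rate from basin state i to basin state j;
    rates_out[i][k]: rate from basin state i to external state k.
    Returns (mean residence time, exit-state probabilities) for a
    walker entering the basin at state `start`."""
    rates_in = np.asarray(rates_in, float)
    rates_out = np.asarray(rates_out, float)
    total = rates_in.sum(1) + rates_out.sum(1)   # total escape rate per state
    T = rates_in / total[:, None]                # in-basin transition probs
    R = rates_out / total[:, None]               # basin -> exterior probs
    N = np.linalg.inv(np.eye(len(total)) - T)    # fundamental matrix
    e = np.zeros(len(total)); e[start] = 1.0
    tau = e @ N @ (1.0 / total)                  # expected residence time
    p_exit = e @ N @ R                           # exit-state probabilities
    return tau, p_exit
```

In a kinetic Monte Carlo run, one would replace the many intra-basin hops by a single event: advance the clock by a draw consistent with tau and place the walker in an external state sampled from p_exit.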

  19. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    NASA Astrophysics Data System (ADS)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, beside the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from IRPHEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, to compute local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tallies preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard

  20. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  1. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction, validated against Kovats retention indices, decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
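A minimal sketch in the spirit of Monte Carlo cross-validation outlier screening (not the authors' enhanced method): repeatedly fit a model on a random training subset, record each held-out sample's prediction error, and flag samples whose errors stay large across splits. The 1D least-squares toy model, split fraction, and mean-error scoring are illustrative assumptions:

```python
import random
import statistics

def mc_outlier_scores(x, y, n_splits=500, train_frac=0.7, seed=0):
    """For each sample, return the mean absolute held-out prediction error
    over many random train/test splits of a 1D least-squares line fit."""
    rng = random.Random(seed)
    n = len(x)
    errs = [[] for _ in range(n)]
    idx = list(range(n))
    n_train = int(train_frac * n)
    for _ in range(n_splits):
        rng.shuffle(idx)
        train, test = idx[:n_train], idx[n_train:]
        mx = sum(x[i] for i in train) / len(train)
        my = sum(y[i] for i in train) / len(train)
        sxx = sum((x[i] - mx) ** 2 for i in train)
        b = sum((x[i] - mx) * (y[i] - my) for i in train) / sxx
        a = my - b * mx
        for i in test:
            errs[i].append(abs(y[i] - (a + b * x[i])))
    return [statistics.mean(e) for e in errs]
```

Samples whose score is far above the bulk of the distribution are the "dubious" candidates that a method like the one above would then examine more closely.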

  2. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  3. Quantum Monte Carlo calculations for carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Lähde, Timo A.

    2016-04-01

    We show how lattice quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.

  4. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  5. Fast Lattice Monte Carlo Simulations of Polymers

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Pengfei

    2014-03-01

    The recently proposed fast lattice Monte Carlo (FLMC) simulations (with multiple occupancy of lattice sites (MOLS) and Kronecker δ-function interactions) give much faster/better sampling of configuration space than both off-lattice molecular simulations (with pair-potential calculations) and conventional lattice Monte Carlo simulations (with self- and mutual-avoiding walk and nearest-neighbor interactions) of polymers.[1] Quantitative coarse-graining of polymeric systems can also be performed using lattice models with MOLS.[2] Here we use several model systems, including polymer melts, solutions, blends, as well as confined and/or grafted polymers, to demonstrate the great advantages of FLMC simulations in the study of equilibrium properties of polymers.

  6. Monte-Carlo Opening Books for Amazons

    NASA Astrophysics Data System (ADS)

    Kloetzer, Julien

Automatically creating opening books is a natural step towards the building of strong game-playing programs, especially when there is little available knowledge about the game. However, while recent popular Monte-Carlo Tree-Search programs have shown strong results for various games, we show here that programs based on such methods cannot efficiently use opening books created using algorithms based on minimax. To overcome this issue, we propose to use an MCTS-based technique, Meta-MCTS, to create such opening books. This method, while requiring some tuning to arrive at the best possible opening book, shows promising results for creating an opening book for the game of Amazons, even if this comes at the cost of removing its Monte-Carlo part.

  7. Monte Carlo modeling of exospheric bodies - Mercury

    NASA Technical Reports Server (NTRS)

    Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.

    1978-01-01

    In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
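The distinction drawn above between a Maxwell-Boltzmann density source and a Maxwell-Boltzmann flux source changes how source speeds must be sampled. A hedged sketch of the flux case (the variable substitution and the Gamma(2) sampling trick are standard textbook devices, not taken from the paper):

```python
import math
import random

def sample_mb_flux_speed(v_th, rng):
    """Sample a speed from the Maxwell-Boltzmann *flux* distribution
    P(v) ∝ v^3 exp(-v^2 / v_th^2), with v_th = sqrt(2kT/m).
    Substituting x = (v/v_th)^2 gives P(x) ∝ x e^{-x}, i.e. a Gamma(2,1)
    variate, which is sampled exactly as -ln(u1) - ln(u2)."""
    x = -math.log(1.0 - rng.random()) - math.log(1.0 - rng.random())
    return v_th * math.sqrt(x)
```

By contrast, sampling from the density distribution (∝ v² e^{-v²/v_th²}) and treating those particles as flux over-weights slow particles; the mean speed of the flux distribution above is Γ(2.5)/Γ(2) ≈ 1.329 v_th rather than the density-weighted 2/√π ≈ 1.128 v_th.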

  8. Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Denton, Alan R.; Schmidt, Michael P.

    2013-03-01

    Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.
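In the ideal-gas limit (where the configurational energy change of a transfer vanishes), the GEMC particle-exchange move reduces to a simple acceptance rule, sketched below. Box volumes, particle counts, and the omission of volume-exchange and displacement moves (and of the Boltzmann factor exp(-βΔU) a real fluid would need) are illustrative simplifications, not the article's programs:

```python
import random

def gemc_ideal_gas(n_total=200, v1=1.0, v2=3.0, steps=200000, seed=2):
    """Gibbs-ensemble particle-transfer moves for an ideal gas. The
    acceptance for moving one particle from the source to the destination
    box is min(1, N_src*V_dst / ((N_dst+1)*V_src)); at equilibrium the two
    boxes reach equal number density. Returns the average N1."""
    rng = random.Random(seed)
    n1 = n_total // 2
    acc, samples = 0.0, 0
    for step in range(steps):
        if rng.random() < 0.5:
            # attempt a transfer from box 1 to box 2
            if n1 > 0 and rng.random() < min(1.0, n1 * v2 / ((n_total - n1 + 1) * v1)):
                n1 -= 1
        else:
            # attempt a transfer from box 2 to box 1
            n2 = n_total - n1
            if n2 > 0 and rng.random() < min(1.0, n2 * v1 / ((n1 + 1) * v2)):
                n1 += 1
        if step >= steps // 2:      # tally N1 over the second half only
            acc += n1
            samples += 1
    return acc / samples
```

With v2 = 3 v1 the equilibrium distribution of N1 is binomial with mean n_total/4, so the fluctuations of N1 around that mean are exactly the finite-size fluctuations the virtual-GEMC pedagogy discussed above is designed to expose.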

  9. Lévy-Ciesielski random series as a useful platform for Monte Carlo path integral sampling.

    PubMed

    Predescu, Cristian

    2005-04-01

    We demonstrate that the Lévy-Ciesielski implementation of Lie-Trotter products enjoys several properties that make it extremely suitable for path-integral Monte Carlo simulations: fast computation of paths, fast Monte Carlo sampling, and the ability to use different numbers of time slices for the different degrees of freedom, commensurate with the quantum effects. It is demonstrated that a Monte Carlo simulation for which particles or small groups of variables are updated in a sequential fashion has a statistical efficiency that is always comparable to or better than that of an all-particle or all-variable update sampler. The sequential sampler results in significant computational savings if updating a variable costs only a fraction of the cost for updating all variables simultaneously or if the variables are independent. In the Lévy-Ciesielski representation, the path variables are grouped in a small number of layers, with the variables from the same layer being statistically independent. The superior performance of the fast sampling algorithm is shown to be a consequence of these observations. Both mathematical arguments and numerical simulations are employed in order to quantify the computational advantages of the sequential sampler, the Lévy-Ciesielski implementation of path integrals, and the fast sampling algorithm. PMID:15903818
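The layered structure described above can be sketched for a standard Brownian bridge: each layer fills in the midpoints of the previous layer's intervals with statistically independent Gaussian variables. This is the generic Lévy midpoint construction for illustration, not the paper's path-integral implementation:

```python
import math
import random

def levy_ciesielski_bridge(n_layers, rng):
    """Build a standard Brownian bridge on [0, 1] at n = 2**n_layers + 1
    grid points, layer by layer: layer k fills in the midpoints of the
    layer-(k-1) intervals; variables within a layer are independent."""
    n = 2 ** n_layers
    path = [0.0] * (n + 1)                    # bridge pinned at both ends
    for k in range(n_layers):
        step = n // 2 ** (k + 1)              # half-width at this layer
        sigma = math.sqrt(step / (2.0 * n))   # conditional midpoint std dev
        for mid in range(step, n, 2 * step):  # independent midpoint updates
            mean = 0.5 * (path[mid - step] + path[mid + step])
            path[mid] = mean + sigma * rng.gauss(0.0, 1.0)
    return path
```

Because the variables in one layer are conditionally independent given the coarser layers, a Monte Carlo sampler can update them in the sequential, layer-by-layer fashion whose efficiency the abstract analyzes, or use fewer layers for degrees of freedom with weaker quantum effects.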

  10. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis), as well as laterally asymmetric features (e.g. melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.

  11. Monte Carlo simulation of Alaska wolf survival

    NASA Astrophysics Data System (ADS)

    Feingold, S. J.

    1996-02-01

    Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
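Penna's bit-string aging model is simple enough to sketch in full; the version below is a generic toy with a hunting mortality bolted on, and every parameter name and value (threshold T, maturity R, mutation count M, capacity, hunt_rate) is illustrative rather than taken from the paper.

```python
import random

def penna_year(pop, T, R, M, capacity, hunt_rate, rng):
    """Advance a Penna bit-string population by one year. Each wolf is an
    (age, genome) pair; bit a of the genome is a deleterious mutation that
    becomes active at age a. Death occurs by accumulated mutations (>= T
    active bits), by hunting, or by the Verhulst crowding factor; survivors
    of age >= R each produce one pup carrying M fresh mutations."""
    n = len(pop)
    survivors = []
    for age, genome in pop:
        active = bin(genome & ((1 << (age + 1)) - 1)).count("1")
        if active >= T:
            continue                           # genetic death
        if rng.random() < hunt_rate:
            continue                           # shot by hunters
        if rng.random() < n / capacity:
            continue                           # Verhulst (resource) death
        survivors.append((age + 1, genome))
    pups = []
    for age, genome in survivors:
        if age >= R:
            child = genome
            for _ in range(M):
                child |= 1 << rng.randrange(32)  # one new deleterious bit
            pups.append((0, child))
    return survivors + pups
```

Raising hunt_rate, or adding extra mortality to model social disruption, shifts the equilibrium population downward and eventually drives it extinct, which is the qualitative effect the abstract reports.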

  12. Obituary: Wayne Carlos Hendrickson, 1953-2007

    NASA Astrophysics Data System (ADS)

    Trimble, Virginia

    2007-12-01

Wayne Carlos Hendrickson was born on 4 December 1953 in Alhambra, California. He earned a BA/BS in Physics from the University of California at Irvine circa 1975. His PhD in astrophysics was earned in 1984 at the University of Texas at Austin. Hendrickson worked at Raytheon Corporation, for most of the rest of his life, on classified research. He died 8 August 2007.

  13. Linear-scaling quantum Monte Carlo calculations.

    PubMed

    Williamson, A J; Hood, R Q; Grossman, J C

    2001-12-10

    A method is presented for using truncated, maximally localized Wannier functions to introduce sparsity into the Slater determinant part of the trial wave function in quantum Monte Carlo calculations. When combined with an efficient numerical evaluation of these localized orbitals, the dominant cost in the calculation, namely, the evaluation of the Slater determinant, scales linearly with system size. This technique is applied to accurate total energy calculation of hydrogenated silicon clusters and carbon fullerenes containing 20-1000 valence electrons. PMID:11736525

  14. Carlos Castillo-Chavez: a century ahead.

    PubMed

    Schatz, James

    2013-01-01

When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration my immediate reaction was por supuesto que sí! Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996. PMID:24245638

  15. Numerical reproducibility for implicit Monte Carlo simulations

    SciTech Connect

    Cleveland, M.; Brunner, T.; Gentile, N.

    2013-07-01

We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step. (authors)
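Both halves of the problem, the order-dependence of floating-point summation and the integer-tally remedy, can be demonstrated in a few lines. The fixed-point scale below is an arbitrary illustrative choice, not the one used in the cited work.

```python
from itertools import permutations

SCALE = 2 ** 32          # fixed-point scale: keeps roughly 9 decimal digits

def integer_tally(values):
    """Round each contribution to fixed point and sum exactly in integers;
    integer addition is associative, so any summation order (serial or
    parallel reduction) yields a bit-identical result."""
    return sum(round(v * SCALE) for v in values) / SCALE

contributions = [0.1, 0.2, 0.3]
# Plain double-precision sums depend on the order of addition...
float_sums = {sum(p) for p in permutations(contributions)}
# ...while the fixed-point integer tally gives one value for every order.
fixed_sums = {integer_tally(p) for p in permutations(contributions)}
```

The set of floating-point sums contains more than one value, illustrating the non-associativity that breaks reproducibility under parallel reduction, while the integer tally collapses to a single value for every ordering.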

  16. jTracker and Monte Carlo Comparison

    NASA Astrophysics Data System (ADS)

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

SeaQuest is designed to observe the characteristics and behavior of `sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  17. Monte Carlo dose mapping on deforming anatomy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
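The core idea of EMCM, pushing energy and mass separately and only then forming dose, can be sketched as follows. This is a conceptual toy, not the EGSnrc/DOSXYZnrc implementation: the `target_of` array is a hypothetical stand-in for the voxel correspondence a displacement vector field would provide, and the grids are flattened to one dimension.

```python
def emcm_map(energy, mass, target_of):
    """Energy-and-mass congruent mapping (sketch): push each source voxel's
    deposited energy and mass to its reference voxel (target_of[i] is the
    reference voxel receiving source voxel i), then form dose = energy/mass
    on the reference grid. Unlike direct dose interpolation, total energy
    is conserved by construction."""
    n_ref = max(target_of) + 1
    e_ref = [0.0] * n_ref
    m_ref = [0.0] * n_ref
    for i, j in enumerate(target_of):
        e_ref[j] += energy[i]
        m_ref[j] += mass[i]
    dose = [e / m if m > 0.0 else 0.0 for e, m in zip(e_ref, m_ref)]
    return dose, e_ref
```

When several source voxels collapse into one reference voxel, the mapped dose is the energy-weighted result total_energy / total_mass, rather than an interpolation of the per-voxel doses.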

  18. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.

  19. Comparison of I-131 Radioimmunotherapy Tumor Dosimetry: Unit Density Sphere Model Versus Patient-Specific Monte Carlo Calculations

    PubMed Central

    Howard, David M.; Kearfott, Kimberlee J.; Wilderman, Scott J.

    2011-01-01

High computational requirements restrict the use of Monte Carlo algorithms for dose estimation in a clinical setting, despite the fact that they are considered more accurate than traditional methods. The goal of this study was to compare mean tumor absorbed dose estimates using the unit density sphere model incorporated in OLINDA with previously reported dose estimates from Monte Carlo simulations using the dose planning method (DPM) particle transport algorithm. The dataset (57 tumors, 19 lymphoma patients who underwent SPECT/CT imaging during I-131 radioimmunotherapy) included tumors of varying size, shape, and contrast. OLINDA calculations were first carried out using the baseline tumor volume and residence time from SPECT/CT imaging during 6 days post-tracer and 8 days post-therapy. Next, the OLINDA calculation was split over multiple time periods and summed to get the total dose, which accounted for the changes in tumor size. Results from the second calculation were compared with results determined by coupling SPECT/CT images with DPM Monte Carlo algorithms. Results from the OLINDA calculation accounting for changes in tumor size were almost always higher (median 22%, range −1%–68%) than the results from OLINDA using the baseline tumor volume because of tumor shrinkage. There was good agreement (median −5%, range −13%–2%) between the OLINDA results and the self-dose component from Monte Carlo calculations, indicating that tumor shape effects are a minor source of error when using the sphere model. However, because the sphere model ignores cross-irradiation, the OLINDA calculation significantly underestimated (median 14%, range 2%–31%) the total tumor absorbed dose compared with Monte Carlo. These results show that when the quantity of interest is the mean tumor absorbed dose, the unit density sphere model is a practical alternative to Monte Carlo for some applications. For applications requiring higher accuracy, computer-intensive Monte

  20. Computers and Computer Resources.

    ERIC Educational Resources Information Center

    Bitter, Gary

    1980-01-01

    This resource directory provides brief evaluative descriptions of six popular home computers and lists selected sources of educational software, computer books, and magazines. For a related article on microcomputers in the schools, see p53-58 of this journal issue. (SJL)

  1. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    SciTech Connect

Densmore, Jeffery D., E-mail: jdd@lanl.gov; Urbatsch, Todd J., E-mail: tmonster@lanl.gov; Evans, Thomas M., E-mail: tme@lanl.gov; Buksas, Michael W., E-mail: mwbuksas@lanl.gov

    2007-03-20

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. 
Finally, we develop a technique for estimating radiation momentum deposition during the

  2. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    NASA Astrophysics Data System (ADS)

    He, Tongming Tony

In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors to treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets by using a Monte Carlo based method. Optimization of beamlet intensities follows, based on the calculated dose data, using an optimization algorithm that is capable of escaping local minima and preventing possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH based objective function and a parallel platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and FSPS beam model is validated by dose measurements in phantoms. Agreements better than 1.5% or 0.2 cm have been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed.
The clinical importance of Monte Carlo based

  3. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations. PMID:16383566
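Straight-line tracking with continuous slowing-down energy deposition can be illustrated on a uniform 1-D mesh, a deliberate simplification of the paper's distorted cylindrical Lagrangian cells. The `stopping_power` callable is a user-supplied model, not a physical DT stopping-power formula.

```python
def track_alpha(e0, cell_length, n_cells, stopping_power):
    """Straight-line alpha-particle transport through a uniform 1-D mesh
    under the continuous slowing-down approximation: the particle deposits
    dE = S(E) * dl in each cell until it ranges out or escapes the mesh."""
    deposited = [0.0] * n_cells
    e = e0
    for i in range(n_cells):
        if e <= 0.0:
            break
        loss = stopping_power(e) * cell_length
        if loss >= e:                  # particle stops inside this cell
            deposited[i] = e
            e = 0.0
        else:
            deposited[i] = loss
            e -= loss
    return deposited, e                # e > 0 means the particle escaped
```

Energy is conserved by construction: the per-cell deposits plus any escaping energy always sum to the birth energy, which is the property a hydrodynamics coupling relies on.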

  4. Use of MOSFET dosimeters to validate Monte Carlo radiation treatment calculation in an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Juste, Belén; Miró, R.; Abella, V.; Santos, A.; Verdú, Gumersindo

    2015-11-01

Radiation therapy treatment planning based on Monte Carlo simulation provides a very accurate dose calculation compared to deterministic systems. Nowadays, Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy to verify the dose received by patients. In the present work, we have used MCNP6 (Monte Carlo N-Particle transport code) to simulate the irradiation of an anthropomorphic phantom (RANDO) with a medical linear accelerator. The detailed model of the Elekta Precise multileaf collimator using a 6 MeV photon beam was designed and validated by means of different beam sizes and shapes in previous works. To include the RANDO phantom geometry in the simulation, a set of Computed Tomography images of the phantom was obtained and formatted. The slices are input into PLUNC software, which performs the segmentation by defining anatomical structures, and a Matlab algorithm writes the phantom information in MCNP6 input deck format. The simulation was verified, and the phantom model and irradiation were thereby validated, through the comparison of High-Sensitivity MOSFET dosimeter (Best Medical Canada) measurements at different points inside the phantom with simulation results. On-line wireless MOSFETs provide dose estimation in an extremely thin sensitive volume, so a meticulous and accurate validation has been performed. The comparison shows good agreement between the MOSFET measurements and the Monte Carlo calculations, confirming the validity of the developed procedure to include patients' CT images in simulations and supporting the use of Monte Carlo simulation for accurate treatment planning.

  5. A configuration space Monte Carlo algorithm for solving the nuclear pairing problem

    NASA Astrophysics Data System (ADS)

    Lingle, Mark

Nuclear pairing correlations using Quantum Monte Carlo are studied in this dissertation. We start by defining the nuclear pairing problem and discussing several historical methods developed to solve this problem, paying special attention to the applicability of such methods. A numerical example discussing pairing correlations in several calcium isotopes using the BCS and Exact Pairing solutions is presented. The ground state energies, correlation energies, and occupation numbers are compared to determine the applicability of each approach to realistic cases. Next we discuss some generalities related to the theory of Markov Chains and Quantum Monte Carlo with regard to nuclear structure. Finally we present our configuration space Monte Carlo algorithm, starting from a discussion of a path integral approach by the authors. Some general features of the Pairing Hamiltonian that boost the effectiveness of a configuration space Monte Carlo approach are mentioned. The full details of our method are presented and special attention is paid to convergence and error control. We present a series of examples illustrating the effectiveness of our approach. These include situations with non-constant pairing strengths, limits when pairing correlations are weak, the computation of excited states, and problems when the relevant configuration space is large. We conclude with a chapter examining some of the effects of continuum states in 24O.

  6. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  7. Marshall Rosenbluth and the Beginning of Monte Carlo Simulations for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Gubernatis, James E.

    2004-11-01

The 1953 publication, ``Equation of State Calculations by Very Fast Computing Machines'' by Nick Metropolis, Arianna and Marshall Rosenbluth, and Mici and Edward Teller [1], marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they became unknown to the large simulation physics community that grew from this publication, and their roles in its development became the subject of mystery and legend. In what is likely his last publication, Marshall Rosenbluth gave his recollections of the algorithm's development [2], the first recollection of the algorithm's development ever recorded, and laid claim to what perhaps should have been called the Rosenbluth algorithm. I will describe the algorithm, reconstruct the historical context in which it was developed, summarize Marshall's recollections, and share his parting challenges to those doing Monte Carlo simulations. [1] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953). [2] M. N. Rosenbluth, in The Monte Carlo Method in the Physical Sciences, edited by J. E. Gubernatis (American Institute of Physics, New York, 2003), p. 22.
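The algorithm the 1953 paper introduced fits in a few lines. The sketch below samples a Boltzmann distribution exp(-beta * E(x)) for an arbitrary one-dimensional energy function; it is the textbook form of the method, not a reconstruction of the original 2-D hard-disk program.

```python
import math
import random

def metropolis(energy, x0, step, n_samples, beta, rng):
    """The Metropolis algorithm: propose a symmetric random displacement,
    accept unconditionally if the energy drops, and otherwise accept with
    probability exp(-beta * dE). The current state is recorded every
    iteration whether or not the move was accepted."""
    x, samples = x0, []
    for _ in range(n_samples):
        trial = x + step * (2.0 * rng.random() - 1.0)   # symmetric proposal
        d_e = energy(trial) - energy(x)
        if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
            x = trial                                    # accept
        samples.append(x)                                # always tally
    return samples
```

For a harmonic energy E(x) = x**2 / 2 at beta = 1, the chain's samples reproduce a unit-variance Gaussian, the one-dimensional analogue of the equation-of-state averages computed in the original paper.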

  8. Validation of the Monte Carlo simulator GATE for indium-111 imaging.

    PubMed

    Assié, K; Gardin, I; Véra, P; Buvat, I

    2005-07-01

    Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions. PMID:15972984

  9. Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Pawlicki, T.; Jiang, S. B.; Li, J. S.; Deng, J.; Mok, E.; Kapur, A.; Xing, L.; Ma, L.; Boyer, A. L.

    2000-09-01

    The purpose of this work was to use Monte Carlo simulations to verify the accuracy of the dose distributions from a commercial treatment planning optimization system (Corvus, Nomos Corp., Sewickley, PA) for intensity-modulated radiotherapy (IMRT). A Monte Carlo treatment planning system has been implemented clinically to improve and verify the accuracy of radiotherapy dose calculations. Further modifications to the system were made to compute the dose in a patient for multiple fixed-gantry IMRT fields. The dose distributions in the experimental phantoms and in the patients were calculated and used to verify the optimized treatment plans generated by the Corvus system. The Monte Carlo calculated IMRT dose distributions agreed with the measurements to within 2% of the maximum dose for all the beam energies and field sizes for both the homogeneous and heterogeneous phantoms. The dose distributions predicted by the Corvus system, which employs a finite-size pencil beam (FSPB) algorithm, agreed with the Monte Carlo simulations and measurements to within 4% in a cylindrical water phantom with various hypothetical target shapes. Discrepancies of more than 5% (relative to the prescribed target dose) in the target region and over 20% in the critical structures were found in some IMRT patient calculations. The FSPB algorithm as implemented in the Corvus system is adequate for homogeneous phantoms (such as prostate) but may result in significant under- or over-estimation of the dose in some cases involving heterogeneities such as the air-tissue, lung-tissue and tissue-bone interfaces.

  10. A Hamiltonian Monte-Carlo method for Bayesian inference of supermassive black hole binaries

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.; Carré, Jérôme

    2014-07-01

We investigate the use of a Hamiltonian Monte-Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte-Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte-Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ~10^9 to ~10^6. The result is an implementation of the Hamiltonian Monte-Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC.
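The position/momentum dynamics described here reduce, in their simplest form, to a leapfrog integration followed by a Metropolis correction. The sketch below is a generic textbook HMC step for a one-dimensional target density, not the authors' gradient-fitting scheme; step size eps and leapfrog count n_leap are illustrative tuning parameters.

```python
import math
import random

def hmc_step(x, logp, grad_logp, eps, n_leap, rng):
    """One Hamiltonian Monte-Carlo update for a 1-D density p(x): draw a
    Gaussian momentum, integrate Hamilton's equations with the leapfrog
    scheme, and Metropolis-correct on the total energy
    H(x, p) = -log p(x) + p**2 / 2."""
    p0 = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p0
    p_new += 0.5 * eps * grad_logp(x_new)        # initial half-kick
    for i in range(n_leap):
        x_new += eps * p_new                     # drift
        if i != n_leap - 1:
            p_new += eps * grad_logp(x_new)      # full kick
    p_new += 0.5 * eps * grad_logp(x_new)        # final half-kick
    h_old = -logp(x) + 0.5 * p0 * p0
    h_new = -logp(x_new) + 0.5 * p_new * p_new
    if h_new <= h_old or rng.random() < math.exp(h_old - h_new):
        return x_new                             # accept
    return x                                     # reject
```

Because the leapfrog integrator nearly conserves H, acceptance stays high even for long trajectories, which is what lets HMC outrun random-walk MCMC; the cost is the repeated gradient evaluation that the article works to avoid.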

  11. Accelerating Monte Carlo Markov chains with proxy and error models

    NASA Astrophysics Data System (ADS)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
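The two-stage accept/reject logic can be sketched with the standard delayed-acceptance Metropolis rule: the cheap proxy density screens each proposal, and the expensive exact density is evaluated only when the proxy accepts, with the second-stage ratio cancelling the proxy bias so the chain still targets the exact distribution. This is a generic sketch, not the article's FPCA-based error model; both log-density callables are placeholders.

```python
import math
import random

def two_stage_step(x, lp_x, logp_exact, logp_proxy, step, rng):
    """One delayed-acceptance Metropolis update. Returns the new state,
    its exact log-density, and how many exact evaluations were used."""
    y = x + step * rng.gauss(0.0, 1.0)
    d_proxy = logp_proxy(y) - logp_proxy(x)
    # Stage 1: screen the proposal with the cheap proxy density.
    if rng.random() >= min(1.0, math.exp(d_proxy)):
        return x, lp_x, 0                       # rejected without exact call
    # Stage 2: exact correction (the expensive forward-model evaluation).
    lp_y = logp_exact(y)
    a2 = min(1.0, math.exp((lp_y - lp_x) - d_proxy))
    if rng.random() < a2:
        return y, lp_y, 1
    return x, lp_x, 1
```

In the test below the "exact" target is a standard normal and the proxy is a deliberately too-wide Gaussian; the chain still recovers unit variance while making fewer exact evaluations than iterations, which is the computational saving the two-stage set-up buys.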

  12. Computational strategy for the crash design analysis using an uncertain computational mechanical model

    NASA Astrophysics Data System (ADS)

    Desceliers, C.; Soize, C.; Zarroug, M.

    2013-08-01

    The framework of this paper is the robust crash analysis of a motor vehicle. The crash analysis is carried out with an uncertain computational model for which uncertainties are taken into account with the parametric probabilistic approach and for which the stochastic solver is the Monte Carlo method. During the design process, different configurations of the motor vehicle are analyzed. Usual interpolation methods cannot be used to predict whether the current configuration is similar to one of the previous configurations already analyzed, for which a complete stochastic computation has been carried out. In this paper, we propose a new indicator that makes it possible to decide whether the current configuration is similar to one of the previously analyzed configurations before the Monte Carlo simulation has finished, and therefore to stop the simulation early.

  13. The macro response Monte Carlo method for electron transport

    NASA Astrophysics Data System (ADS)

    Svatos, Michelle Marie

    1998-10-01

    This thesis proves the feasibility of basing depth dose calculations for electron radiotherapy on first-principles single scatter physics, in an amount of time that is comparable to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or 'kugel'. A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV-8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry, which in this case is a CT (computed tomography) scan of a patient or phantom. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code against EGS4 and MCNP for depth dose in simple phantoms having density inhomogeneities. The energy deposition algorithms for spreading dose across 5-10 zones per kugel were tested. Most resulting depth dose calculations were within 2-3% of well-benchmarked codes, with one excursion to 4%.
This thesis shows that the concept of using single scatter-based physics in clinical radiation
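The global stepping stage described above can be sketched in a few lines. The kugel library here is a toy stand-in (an invented energy-loss and deflection distribution in 2-D), not the actual MRMC PDF data:

```python
import math
import random

def toy_kugel_library(energy):
    """Hypothetical stand-in for the precomputed kugel PDF library: returns a
    kugel radius scaled to the energy and a sampler for the exit state. The
    real library tabulates single-scatter PDFs; these distributions are invented."""
    radius = min(0.1, max(0.025, 0.0125 * energy))  # 0.025-0.1 cm, as in the library
    def sample_exit(e, d):
        e_out = e * random.uniform(0.85, 0.95)   # sampled fractional energy loss
        theta = random.gauss(0.0, 0.1)           # small angular deflection (2-D toy)
        c, s = math.cos(theta), math.sin(theta)
        return e_out, (c * d[0] - s * d[1], s * d[0] + c * d[1])
    return radius, sample_exit

def transport_electron(energy, position, direction, cutoff=0.25):
    """MRMC-style global stepping: sample the kugel exit PDFs, advance the
    electron immediately to the end of the step, repeat until the energy cutoff."""
    path = [position]
    while energy > cutoff:
        radius, sample_exit = toy_kugel_library(energy)
        energy, direction = sample_exit(energy, direction)
        position = tuple(p + radius * c for p, c in zip(position, direction))
        path.append(position)
    return path
```

The point of the Local-to-Global split is visible here: all single-scatter physics is confined to the (precomputed) library, so the global loop does nothing but sample and jump.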

  14. Relative performances of several scientific computers for a liquid molecular dynamics simulation. [Computers tested are: VAX 11/70, CDC 7600, CRAY-1, CRAY-1*, VAX-FPSAP]

    SciTech Connect

    Ceperley, D.M.

    1980-08-01

    Some of the computational characteristics of simulations and the author's experience in using his standard simulation program called CLAMPS on several scientific computers are discussed. CLAMPS is capable of performing Metropolis Monte Carlo and Molecular Dynamics simulations of arbitrary mixtures of single atoms. The computational characteristics of simulations and what makes a good simulation computer are also summarized.

  15. Three-dimensional random earth atmospheres for Monte Carlo trajectory analyses

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.

    1977-01-01

    A set of four computer tapes containing random three-dimensional Earth atmospheres is available for Monte Carlo trajectory analyses. The tapes contain sufficient atmospheric tables to allow replications of any trajectory below an altitude of 99 km. The atmospheres were provided by an empirical model designed to generate random atmospheres whose distributions match those in a database of sounding rocket measurements. A readily implementable means of linking the tapes to any existing trajectory simulation computer program is described; it involves adding three subroutines, which are listed in an appendix.

  16. Probability of initiation and extinction in the Mercury Monte Carlo code

    SciTech Connect

    McKinley, M. S.; Brantley, P. S.

    2013-07-01

    A Monte Carlo method for computing the probability of initiation has previously been implemented in Mercury. Recently, a new method based on the probability of extinction has been implemented as well. The methods share similarities, from counting progeny to cycling in time, but they also differ in areas such as population control and statistical uncertainty reporting. The two methods agree very well for several test problems. Since each method has advantages and disadvantages, we currently recommend that both methods be used to compute the probability of criticality. (authors)
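The probability of extinction admits a classic branching-process illustration: the extinction probability q is the smallest fixed point of the offspring probability generating function, and the probability of initiation is 1 - q. This is the generic mathematics behind such methods, not Mercury's implementation, and the progeny distribution below is invented for illustration:

```python
def extinction_probability(offspring_pmf, iterations=200):
    """Fixed-point iteration q <- f(q) on the offspring probability generating
    function f(q) = sum_k p_k q^k of a branching process; starting from q = 0,
    the iteration converges to the smallest root, the extinction probability."""
    q = 0.0
    for _ in range(iterations):
        q = sum(p * q ** k for k, p in enumerate(offspring_pmf))
    return q

# Hypothetical supercritical chain: 0, 1, or 2 progeny per neutron, mean 1.25.
pmf = [0.25, 0.25, 0.5]
q = extinction_probability(pmf)
poi = 1.0 - q  # probability of initiation
```

For a subcritical distribution (mean progeny below one) the iteration converges to q = 1, i.e., a self-sustaining chain can never be initiated, which is the expected limiting behavior.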

  17. Efficient Monte Carlo simulations using a shuffled nested Weyl sequence random number generator.

    PubMed

    Tretiakov, K V; Wojciechowski, K W

    1999-12-01

    The pseudorandom number generator proposed recently by Holian et al. [B. L. Holian, O. E. Percus, T. T. Warnock, and P. A. Whitlock, Phys. Rev. E 50, 1607 (1994)] is tested via Monte Carlo computation of the free energy difference between the defectless hcp and fcc hard sphere crystals by the Frenkel-Ladd method [D. Frenkel and A. J. C. Ladd, J. Chem. Phys. 81, 3188 (1984)]. It is shown that this generator, which is fast and convenient for parallel computing, gives results in good agreement with those obtained by other generators. A high-accuracy estimate is obtained for the hcp-fcc free energy difference near melting. PMID:11970727
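The core of the generator family tested here is the nested Weyl sequence. The sketch below shows that core in plain double precision; it omits the shuffling stage and the precision safeguards of Holian et al., so it illustrates the idea rather than faithfully reimplementing the SNWS generator:

```python
import math

def nested_weyl(n, alpha=math.sqrt(2.0) - 1.0):
    """Nested Weyl sequence x_n = frac(n * frac(n * alpha)), with alpha the
    fractional part of an irrational such as sqrt(2). A simplified sketch:
    the actual SNWS generator adds a shuffling stage to decorrelate terms."""
    return math.fmod(n * math.fmod(n * alpha, 1.0), 1.0)
```

Since n * floor(n * alpha) is an integer, this equals frac(n^2 * alpha), a sequence that Weyl's theorem guarantees is equidistributed on [0, 1) for irrational alpha; the shuffling stage exists because successive terms of the bare sequence are still correlated.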

  18. 0.234: The Myth of a Universal Acceptance Ratio for Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Potter, Christopher C. J.; Swendsen, Robert H.

    Two well-known papers by Gelman, Roberts, and Gilks have proposed the application of the results of an interesting mathematical proof to practical optimizations of Markov Chain Monte Carlo computer simulations. In particular, they advocated tuning the simulation parameters to select an acceptance ratio of 0.234. In this paper, we point out that although the proof is valid, its significance is questionable, and its application to practical computations is not advisable. The simulation algorithm considered in the proof is very inefficient and produces poor results under all circumstances.
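As a concrete illustration of what tuning to a target acceptance ratio means in practice, the sketch below adapts the proposal width of a random-walk Metropolis sampler toward such a target. The batch sizes and scaling factors are arbitrary illustrative choices, not a recommended scheme from either paper:

```python
import math
import random

def tune_step_size(log_target, theta0, target_accept=0.234,
                   batches=50, batch_size=100, step=1.0):
    """Random-walk Metropolis with crude batch tuning: widen the proposal
    when the batch acceptance rate exceeds the target, shrink it otherwise.
    The 1.1/0.9 adjustment factors and batch counts are arbitrary."""
    theta = theta0
    for _ in range(batches):
        accepted = 0
        for _ in range(batch_size):
            prop = theta + random.uniform(-step, step)
            delta = log_target(prop) - log_target(theta)
            # Metropolis accept with probability min(1, exp(delta)).
            if delta >= 0 or random.random() < math.exp(delta):
                theta, accepted = prop, accepted + 1
        rate = accepted / batch_size
        step *= 1.1 if rate > target_accept else 0.9
    return step
```

Whether 0.234 is the right `target_accept` is exactly what the paper disputes; the mechanism of tuning toward some ratio is independent of which value one chooses.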

  19. New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code

    SciTech Connect

    Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O'Brien, M J; Taylor, J M

    2007-03-08

    The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability of initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of an in-line visualization capability.

  20. Modeling granular phosphor screens by Monte Carlo methods

    SciTech Connect

    Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.

    2006-12-15

    The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties, and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previous published experimental data on Gd2O2S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd2O2S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd2O2S:Tb screens, under similar conditions (x-ray incident energy, screen thickness).
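The kind of optical transport sampling such a model performs can be caricatured in a few lines. The sketch below replaces the Mie-derived microscopic interaction probabilities with two free scalars (a mean free path and a single-scatter albedo) and a 1-D slab geometry; it is an illustration of the sampling scheme, not the authors' model:

```python
import math
import random

def light_escape_fraction(thickness, mfp, albedo, n_photons=2000):
    """Toy Monte Carlo of optical photon transport in a phosphor layer:
    exponential free paths, isotropic re-emission after each scatter, and a
    single-scatter albedo standing in for the Mie-derived probabilities."""
    escaped = 0
    for _ in range(n_photons):
        z, mu = thickness / 2.0, random.uniform(-1.0, 1.0)  # light born mid-screen
        while True:
            step_len = -mfp * math.log(1.0 - random.random())  # sample free path
            z += mu * step_len
            if z <= 0.0:
                escaped += 1          # exits the front face toward the detector
                break
            if z >= thickness:
                break                 # lost out the back of the screen
            if random.random() > albedo:
                break                 # absorbed within a grain
            mu = random.uniform(-1.0, 1.0)  # isotropic rescatter
    return escaped / n_photons
```

Even this caricature reproduces the qualitative trade-off the paper quantifies: more strongly scattering (or thicker) screens emit less light per absorbed x ray but spread it less, which is why grain size and packing density shape both the MTF and the light output.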