Lunar Regolith Albedos Using Monte Carlos
NASA Technical Reports Server (NTRS)
Wilson, T. L.; Andersen, V.; Pinsky, L. S.
2003-01-01
The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.
Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration
NASA Technical Reports Server (NTRS)
Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.; Gadoli, E.; Gazelli, M. V.; LeBourgeois, M.; Lee, K. T.; Mayes, B.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L. S.; Rancati, T.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannocchio, D.; Smirnov, G.
2004-01-01
NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities open new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Monte Carlo simulation of turnover processes in the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
A Monte Carlo model for the gardening of the lunar surface by meteoritic impact is described, and some representative results are given. The model accounts with reasonable success for a wide variety of properties of the regolith. The smoothness of the lunar surface on a scale of centimeters to meters, which was not reproduced in an earlier version of the model, is accounted for by the preferential downward movement of low-energy secondary particles. The time scale for filling lunar grooves and craters by this process is also derived. The experimental bombardment ages (about 4 × 10^8 yr for spallogenic rare gases, about 10^9 yr for neutron-capture Gd and Sm isotopes) are not reproduced by the model. The explanation is not obvious.
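The gardening process described above can be sketched as a toy Monte Carlo (hypothetical power-law exponent and depths, not the paper's actual model): each impact reworks the regolith to a sampled depth, and a tally records how often tracer depths are overturned.

```python
import random

def gardening_depth_profile(n_impacts=100_000, max_depth_cm=100.0, b=1.5, seed=1):
    """Toy Monte Carlo of impact gardening: each impact reworks the regolith
    down to a depth drawn from a truncated power law (many shallow impacts,
    few deep ones). Returns, for a few tracer depths, the fraction of
    impacts that rework the surface to at least that depth."""
    rng = random.Random(seed)
    test_depths_cm = [1.0, 10.0, 50.0]
    overturned = {d: 0 for d in test_depths_cm}
    for _ in range(n_impacts):
        u = 1.0 - rng.random()                      # uniform in (0, 1]
        depth = min(u ** (-1.0 / b), max_depth_cm)  # power-law depth, >= 1 cm
        for d in test_depths_cm:
            if depth >= d:
                overturned[d] += 1
    return {d: count / n_impacts for d, count in overturned.items()}
```

As expected for a power-law impactor population, shallow layers are overturned far more often than deep ones, which is the qualitative behavior the model above exploits.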
SPQR: a Monte Carlo reactor kinetics code [LMFBR]
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.
An Overview of the Monte Carlo Methods, Codes, & Applications Group
Trahan, Travis John
2016-08-30
This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.
The Monte Carlo code MCPTV--Monte Carlo dose calculation in radiation therapy with carbon ions.
Karg, Juergen; Speer, Stefan; Schmidt, Manfred; Mueller, Reinhold
2010-07-07
The Monte Carlo code MCPTV is presented. MCPTV is designed for dose calculation in treatment planning in radiation therapy with particles, especially carbon ions. MCPTV has a voxel-based concept and can perform a fast calculation of the dose distribution on patient CT data. Material and density information from CT are taken into account. Electromagnetic and nuclear interactions are implemented. Furthermore, the algorithm gives information about the particle spectra and the energy deposition in each voxel. This can be used to calculate the relative biological effectiveness (RBE) for each voxel. Depth dose distributions are compared to experimental data, giving good agreement. A clinical example is shown to demonstrate the capabilities of the MCPTV dose calculation.
Monte Carlo Model Insights into the Lunar Sodium Exosphere
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Killen, R. M.; Sarantos, M.
2012-01-01
Sodium in the lunar exosphere is released from the lunar regolith by several mechanisms. These mechanisms include photon stimulated desorption (PSD), impact vaporization, electron stimulated desorption, and ion sputtering. Usually, PSD dominates; however, transient events can temporarily enhance other release mechanisms so that they are dominant. Examples of transient events include meteor showers and coronal mass ejections. The interaction between sodium and the regolith is important in determining the density and spatial distribution of sodium in the lunar exosphere. The temperature at which sodium sticks to the surface is one factor. In addition, the amount of thermal accommodation during the encounter between the sodium atom and the surface affects the exospheric distribution. Finally, the fraction of particles that stick while the surface is cold and are re-released when the surface warms also affects the exospheric density. In [1], we showed the "ambient" sodium exosphere from Monte Carlo modeling with a fixed source rate and fixed surface interaction parameters. We compared the enhancement when a CME passes the Moon to the ambient conditions. Here, we compare model results to data in order to determine the source rates and surface interaction parameters that provide the best fit of the model to the data.
Progress on coupling UEDGE and Monte-Carlo simulation codes
Rensink, M.E.; Rognlien, T.D.
1996-08-28
Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling of the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.
Improved diffusion coefficients generated from Monte Carlo codes
Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.
2013-07-01
Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters, including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2 norm error of 3.6%. This error is reduced significantly, to 0.27%, when fine-group diffusion coefficients are weighted by the flux and a correction is applied to the diffusion approximation. (authors)
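The homogenization choice above can be made concrete with a one-group numeric sketch (illustrative numbers, not MC21 output): flux-weighting the transport cross section and then inverting gives a different collapsed diffusion coefficient than flux-weighting the fine-group diffusion coefficients directly.

```python
def collapse(flux, sigma_tr):
    """Collapse fine-group data to a single group in the two ways discussed:
    (a) flux-weight the transport cross section, then take D = 1/(3*sigma_tr);
    (b) flux-weight the fine-group diffusion coefficients D_g = 1/(3*sigma_tr_g).
    Returns (d_from_sigma, d_from_d)."""
    phi_tot = sum(flux)
    # (a) collapse the transport cross section, then invert
    sigma_collapsed = sum(p * s for p, s in zip(flux, sigma_tr)) / phi_tot
    d_from_sigma = 1.0 / (3.0 * sigma_collapsed)
    # (b) collapse the fine-group diffusion coefficients directly
    d_fine = [1.0 / (3.0 * s) for s in sigma_tr]
    d_from_d = sum(p * d for p, d in zip(flux, d_fine)) / phi_tot
    return d_from_sigma, d_from_d
```

Because 1/x is convex, weighting the fine-group diffusion coefficients always yields a value at least as large as inverting the collapsed cross section; the two agree only when the fine-group data are flat in energy.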
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)
BEAM: a Monte Carlo code to simulate radiotherapy treatment units.
Rogers, D W; Faddegon, B A; Ding, G X; Ma, C M; We, J; Mackie, T R
1995-05-01
This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed.
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
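The spread of speedups (largest for TIGER, smallest for ACCEPT) is the pattern Amdahl's law predicts when only part of each code's runtime falls in the re-coded subroutines. A minimal sketch (illustrative fractions, not the study's actual profiles):

```python
def amdahl_speedup(fraction_accelerated, local_speedup):
    """Overall speedup when only a fraction f of the runtime is sped up by
    a factor s (Amdahl's law): 1 / ((1 - f) + f / s)."""
    f, s = fraction_accelerated, local_speedup
    return 1.0 / ((1.0 - f) + f / s)
```

For example, accelerating 90% of the runtime by 4x yields only about a 3.1x overall speedup, while accelerating 10% of it by the same factor yields barely 1.08x, which is why geometry-dominated codes like ACCEPT benefit least.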
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Application of Monte Carlo codes in neutron therapy (Application de codes Monte Carlo en neutronthérapie)
NASA Astrophysics Data System (ADS)
Paquis, P.; Pignol, J. P.; Cuendet, P.; Fares, G.; Diop, C.; Iborra, N.; Hachem, A.; Mokhtari, F.; Karamanoukian, D.
1998-04-01
Monte Carlo calculation codes make it possible to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy, fast neutron irradiation, and Neutron Capture Enhancement of fast neutron irradiations.
Radiographic Capabilities of the MERCURY Monte Carlo Code
McKinley, M S; von Wittenau, A
2008-04-07
MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. Recently, a radiographic capability has been added. MERCURY can create a source of diagnostic, virtual particles that are aimed at pixels in an image tally. This new feature is compared to the radiography code, HADES, for verification and timing. Accuracy comparisons were made using the French Test Object, and timing comparisons were made by tracking through an unstructured mesh. In addition, self-consistency tests were run in MERCURY for the British Test Object and a scattering test problem. MERCURY and HADES were found to agree to the precision of the input data. HADES appears to run around eight times faster than MERCURY in the timing study. Profiling the MERCURY code has turned up several differences in the algorithms which account for this. These differences will be addressed in a future release of MERCURY.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced random-number seeds were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 machines was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
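The reproducibility trick described above (advancing seeds so that results do not depend on how histories land on workers) can be sketched by deriving each history's random stream from its index. This is a generic sketch with an arbitrary mixing constant, not KENO-Va's actual generator:

```python
import random

def run_histories(history_ids, base_seed=2024):
    """Tally a toy score over a set of particle histories. Each history gets
    its own RNG seeded from the history index, so the total is independent
    of how histories are distributed among workers."""
    total = 0.0
    for h in history_ids:
        rng = random.Random(base_seed * 1_000_003 + h)  # per-history stream
        total += rng.expovariate(1.0)  # toy 'transport': one path-length sample
    return total

# The same 1000 histories, dealt round-robin to 4 or to 7 'workers',
# reproduce the same tally.
N = 1000
tally_4 = sum(run_histories(range(w, N, 4)) for w in range(4))
tally_7 = sum(run_histories(range(w, N, 7)) for w in range(7))
```

Because each history's stream depends only on its index, changing the processor count changes only the summation order, not the sampled values.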
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics, as more particle tracks can be simulated in a short response time.
Monte Carlo code for high spatial resolution ocean color simulations.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C
2010-09-10
A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E_d and E_u) and upwelling radiance (L_u) in a two-dimensional domain with a high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarks against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single-scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave-focusing effects on E_d, E_u, and L_u data products is of the order of 10^6, 10^9, and 10^10, respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems in view of providing recommendations to improve measurement protocols.
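Photon-count requirements like those quoted follow from the 1/√N convergence of Monte Carlo estimates. A small helper (illustrative, not from the paper's code) scales the uncertainty of a pilot run to a target precision:

```python
def photons_for_precision(rel_sigma_pilot, n_pilot, rel_sigma_target):
    """Monte Carlo standard error scales as 1/sqrt(N): given the relative
    uncertainty observed in a pilot run of n_pilot photons, estimate the
    photon count needed to reach a target relative uncertainty."""
    return n_pilot * (rel_sigma_pilot / rel_sigma_target) ** 2
```

Halving the target uncertainty quadruples the photon count, which is why quantities with noisier estimators (such as upwelling radiance) demand orders of magnitude more photons than irradiances.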
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.
The Monte Carlo code MCSHAPE: Main features and recent developments
NASA Astrophysics Data System (ADS)
Scot, Viviana; Fernandez, Jorge E.
2015-06-01
MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons with the special feature of describing the full evolution of the photon polarization state along the interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV, Compton and Rayleigh scattering and photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the OLTARIS software system for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.
Generation of SFR few-group constants using the Monte Carlo code Serpent
Fridman, E.; Rachamin, R.; Shwageraus, E.
2013-07-01
In this study, the Serpent Monte Carlo code was used as a tool for preparation of homogenized few-group cross sections for the nodal diffusion analysis of Sodium cooled Fast Reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full core calculations. The DYN3D results were verified against the reference full-core Serpent Monte Carlo solutions. A good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent for generation of few-group constants for deterministic SFR analysis. (authors)
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Monte Carlo Capabilities of the SCALE Code System
NASA Astrophysics Data System (ADS)
Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.
2014-06-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Multiparticle Monte Carlo Code System for Shielding and Criticality Use.
2015-06-01
Version 00 COG is a modern, full-featured Monte Carlo radiation transport code that provides accurate answers to complex shielding, criticality, and activation problems. COG was written to be state-of-the-art and free of physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep-penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://cog.llnl.gov. Cross section libraries are included in the package. COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. A lattice feature simplifies the specification of regular arrays of parts. Parallel processing under MPI is supported for multi-CPU systems. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, pathlength stretching, point detectors, scattered-direction biasing, and forced collisions. Criticality: for a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation: COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled problems: COG can solve coupled problems involving neutrons, photons, and electrons. COG 11.1 is an updated version of COG 11.1 BETA 2 (RSICC C00777MNYCP02).
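The particle splitting and Russian roulette listed among the biasing options can be sketched as a weight-window game (a generic sketch, not COG's implementation): heavy particles are split, light ones are probabilistically killed, and both moves preserve the expected total weight.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window game: split particles above w_high, play Russian
    roulette below w_low. Both moves preserve the expected total weight."""
    if weight > w_high:
        n = int(weight / w_high) + 1      # split into n lighter particles
        return [weight / n] * n
    if weight < w_low:
        survival = weight / w_low         # survive with probability w/w_low
        return [w_low] if rng.random() < survival else []
    return [weight]
```

Splitting spends extra tracking effort on particles headed toward important regions, while roulette stops wasting time on particles whose contribution has become negligible; the unbiasedness of the tally is kept because expected weight is conserved in both branches.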
NASA Technical Reports Server (NTRS)
Everson, John; Nelson, H. F.
1993-01-01
A reverse Monte Carlo radiative transfer code to predict rocket plume base heating is presented. In this technique, rays representing the radiation propagation are traced backward in time from the receiving surface to the point of emission in the plume. This increases the computational efficiency relative to the forward Monte Carlo technique when calculating the radiation reaching a specific point, since only the rays that strike the receiving point are considered.
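The backward-tracing idea can be sketched with a toy one-dimensional plume model. The slab parameters (kappa, L, B) and the single normal-direction receiver below are invented for illustration and are not taken from the paper; the point is that every sampled ray contributes to the receiver, so no emitted rays are wasted:

```python
import math, random

random.seed(1)

# Hypothetical plume: a homogeneous emitting-absorbing slab with
# absorption coefficient kappa (1/m), thickness L (m), and blackbody
# source intensity B (arbitrary units).
kappa, L, B = 0.5, 3.0, 1.0

def backward_ray_sample():
    """Trace one ray backward from the receiver along the slab normal.

    The emission point is sampled from the free-path density
    p(s) = kappa * exp(-kappa * s); if the sampled point lies beyond the
    slab, the ray contributes nothing. Only radiation that actually
    reaches the receiver is ever followed.
    """
    s = -math.log(1.0 - random.random()) / kappa  # exponential free path
    return B if s <= L else 0.0

n = 200_000
estimate = sum(backward_ray_sample() for _ in range(n)) / n
exact = B * (1.0 - math.exp(-kappa * L))   # analytic slab intensity
print(f"backward-MC estimate {estimate:.4f} vs analytic {exact:.4f}")
```

A forward calculation would instead emit rays throughout the plume and score only the tiny fraction that happens to hit the receiver, which is why the backward formulation is far more efficient for point receivers.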
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. Using computed tomography (CT) data and the direction and characteristics of the beam, this system calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and successive generations of treatment planning systems have been developed with these factors in mind. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue, using the physics of the interaction of the particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; thus conventional Monte Carlo techniques are accurate but too slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a time reasonable for clinical use; the acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and many commercial treatment planning systems perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics
Bartel, T.J.
1998-12-01
Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
NASA Astrophysics Data System (ADS)
Murray, J.; SU, J. J.; Sagdeev, R.; Chin, G.
2014-12-01
Introduction: Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the composition of the lunar soil [1-3]. Orbital measurements of lunar neutron flux have been made by the Lunar Prospector Neutron Spectrometer (LPNS) [4] of the Lunar Prospector mission and the Lunar Exploration Neutron Detector (LEND) [5] of the Lunar Reconnaissance Orbiter mission. While both are cylindrical helium-3 detectors, LEND's SETN (Sensor EpiThermal Neutrons) instrument is shorter, with double the helium-3 pressure of LPNS. The two instruments therefore have different angular sensitivities and neutron detection efficiencies. Furthermore, the Lunar Prospector's spin-stabilized design makes its detection efficiency latitude-dependent, while the SETN instrument faces permanently downward toward the lunar surface. We use the GEANT4 Monte Carlo simulation code [6] to investigate the leakage lunar neutron energy spectrum, which follows a power law of the form E^-0.9 in the epithermal energy range, and the signals detected by LPNS and SETN in the LP and LRO mission epochs, respectively. Using the lunar neutron flux reconstructed for the LPNS epoch, we calculate the signal that would have been observed by SETN at that time. The subsequent deviation from the actual signal observed during the LEND epoch is due to the significantly higher intensity of Galactic Cosmic Rays during the anomalous Solar Minimum of 2009-2010. References: [1] W. C. Feldman, et al. (1998) Science, Vol. 281, no. 5382, pp. 1496-1500. [2] Gasnault, O., et al. (2000) J. Geophys. Res., 105(E2), 4263-4271. [3] Little, R. C., et al. (2003) J. Geophys. Res., 108(E5), 5046. [4] W. C. Feldman, et al. (1999) Nucl. Inst. and Methods in Phys. Res. A 422. [5] M. L. Litvak, et al. (2012) J. Geophys. Res., 117, E00H32. [6] J. Allison, et al. (2006) IEEE Trans. on Nucl. Sci., Vol. 53, No. 1.
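A power-law spectrum like the quoted E^-0.9 leakage spectrum can be sampled directly by inverse transform. The epithermal bounds used below (0.4 eV to 10 keV) are illustrative assumptions, not values from the LEND/LPNS analysis:

```python
import math, random

random.seed(0)

# Spectrum phi(E) ~ E**G with G = -0.9 on [E1, E2]; bounds are
# illustrative epithermal limits in eV, not the paper's values.
E1, E2, G = 0.4, 1.0e4, -0.9

def sample_energy():
    """Sample E from a power law E**G on [E1, E2] by inverting the CDF.

    For G != -1 the CDF is (E**a - E1**a)/(E2**a - E1**a) with a = G+1,
    so E = (E1**a + u*(E2**a - E1**a))**(1/a) for uniform u.
    """
    a = G + 1.0                      # = 0.1 for the E**-0.9 spectrum
    u = random.random()
    return (E1**a + u * (E2**a - E1**a)) ** (1.0 / a)

samples = [sample_energy() for _ in range(100_000)]
a = G + 1.0
cdf10 = (10.0**a - E1**a) / (E2**a - E1**a)   # analytic CDF at 10 eV
frac = sum(e < 10.0 for e in samples) / len(samples)
print(f"fraction below 10 eV: {frac:.3f} (analytic {cdf10:.3f})")
```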
T.J. Urbatsch; T.M. Evans
2006-02-15
We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow the analysis of complicated geometrical structures. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can also be run on parallel machines, allowing further acceleration.
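The optimization strategy described here, replacing exact logarithms with cheap approximations whose error stays well below the 1% target, can be sketched as follows. The particular range-reduction-plus-series scheme is a stand-in, not the authors' published polynomials, and in pure Python the point is the error bound rather than the speedup (which requires a compiled implementation):

```python
import math, random

LN2 = 0.6931471805599453

def fast_log(x):
    """Approximate ln(x) for x > 0.

    Range-reduce with x = m * 2**e, m in [0.5, 1), so ln(x) = ln(m) + e*ln 2,
    then use ln(m) = 2*atanh(z) with z = (m-1)/(m+1), truncated after z**5.
    """
    m, e = math.frexp(x)
    z = (m - 1.0) / (m + 1.0)
    z2 = z * z
    return 2.0 * z * (1.0 + z2 * (1.0 / 3.0 + z2 / 5.0)) + e * LN2

# Photon step lengths use -ln(u) for u in (0, 1); check the relative
# error of the approximation over that range.
random.seed(2)
worst = max(abs(fast_log(u) - math.log(u)) / abs(math.log(u))
            for u in (random.uniform(1e-6, 0.999) for _ in range(100_000)))
print(f"worst relative error observed: {worst:.2e}")
```

The truncated series is worst near m = 0.5, where its relative error is still a few parts in 10^4, comfortably under the 1% criterion the paper sets.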
Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo
NASA Astrophysics Data System (ADS)
Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor
2005-06-01
New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modelling. Detailed, fast codes are needed in order to extract and understand the richness of information now available. Several hybrid simulation codes have been proposed recently to this effect (e.g., combinations of the traditional Monte Carlo scheme with systems of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid three-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time-consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code simulates each particle with the traditional Monte Carlo method at two steps of the shower development: the first step captures the large fluctuations of the very first particle interactions at high energies, while the second step provides a detailed lateral distribution simulation of the final stages of the air shower. The two Monte Carlo simulation steps are connected by a cascade equation system which correctly reproduces the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton-, iron- and gamma-induced air showers and compare the predictions with those of the well-known CORSIKA code using the QGSJET hadronic interaction model.
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
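The Functional Expansion Tally idea mentioned above, transferring a distribution as expansion coefficients rather than histogram bins, can be illustrated with a one-dimensional Legendre expansion. The linear test shape and sample counts below are invented for the demo and have nothing to do with the actual OpenMC/MOOSE coupling:

```python
import random

random.seed(7)

def sample_site():
    """Rejection-sample x in [-1, 1] from the test pdf f(x) = (1 + 0.5*x)/2."""
    while True:
        x = random.uniform(-1.0, 1.0)
        if random.random() < (1.0 + 0.5 * x) / 1.5:   # f(x) / f_max
            return x

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    if n == 0:
        return 1.0
    if n == 1:
        return x
    return ((2 * n - 1) * x * legendre(n - 1, x)
            - (n - 1) * legendre(n - 2, x)) / n

# FET: tally the Legendre moments c_n = E[P_n(x)] over sampled sites.
N_ORDER, N_SAMP = 2, 100_000
sites = [sample_site() for _ in range(N_SAMP)]
coeff = [sum(legendre(n, x) for x in sites) / N_SAMP
         for n in range(N_ORDER + 1)]

def reconstructed(x):
    """Rebuild the density: f(x) ~ sum_n (2n+1)/2 * c_n * P_n(x)."""
    return sum((2 * n + 1) / 2.0 * coeff[n] * legendre(n, x)
               for n in range(N_ORDER + 1))

x0 = 0.5
print(f"FET reconstruction {reconstructed(x0):.3f} vs exact {(1 + 0.5 * x0) / 2:.3f}")
```

A handful of coefficients reproduces the smooth shape that a histogram tally would need many bins (and many more samples) to resolve, which is what makes the transfer to a finite element mesh efficient.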
Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
Kirk, B.L.; West, J.T.
1984-06-01
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided.
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Large scale cratering of the lunar highlands - Some Monte Carlo model considerations
NASA Technical Reports Server (NTRS)
Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.
1976-01-01
In an attempt to understand the scale and intensity of the moon's early, large-scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter (for example, the number of times, and to what depths, specific fractions of the entire lunar surface were cratered). The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
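The bookkeeping behind such a model can be sketched by dropping power-law-distributed craters onto a grid and tracking the maximum excavation depth of each cell. The domain size, crater count, size-frequency exponent, and the depth/diameter ratio of 0.2 are all illustrative stand-ins, not the paper's parameters:

```python
import random

random.seed(3)

# Toy cratering model: N(>D) ~ D**-2 size-frequency, circular craters at
# random positions, excavation depth = DEPTH_RATIO * D. All numbers are
# invented for illustration.
SIDE_KM, CELLS, N_CRATERS = 200.0, 100, 4000
D_MIN, D_MAX, DEPTH_RATIO = 0.8, 50.0, 0.2

def sample_diameter():
    """Inverse-transform sample of D from N(>D) ~ D**-2 on [D_MIN, D_MAX]."""
    u = random.random()
    inv2 = D_MIN ** -2 - u * (D_MIN ** -2 - D_MAX ** -2)
    return inv2 ** -0.5

depth = [[0.0] * CELLS for _ in range(CELLS)]
cell = SIDE_KM / CELLS
for _ in range(N_CRATERS):
    d = sample_diameter()
    cx, cy = random.uniform(0, SIDE_KM), random.uniform(0, SIDE_KM)
    r, excav = d / 2.0, DEPTH_RATIO * d
    # Mark every grid cell whose center lies inside the crater rim.
    for i in range(max(0, int((cx - r) / cell)),
                   min(CELLS, int((cx + r) / cell) + 1)):
        for j in range(max(0, int((cy - r) / cell)),
                       min(CELLS, int((cy + r) / cell) + 1)):
            x, y = (i + 0.5) * cell, (j + 0.5) * cell
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                depth[i][j] = max(depth[i][j], excav)

flat = [v for row in depth for v in row]
frac_uncratered = sum(v == 0.0 for v in flat) / len(flat)
frac_deep = sum(v > 2.0 for v in flat) / len(flat)
print(f"never cratered: {frac_uncratered:.2f}, "
      f"cratered deeper than 2 km: {frac_deep:.2f}")
```

Scaling the crater count up or down is exactly how such a model explores bombardment histories more or less intense than the present-day record.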
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous energy cross sections and S(α,β) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
Force field development with GOMC, a fast new Monte Carlo molecular simulation code
NASA Astrophysics Data System (ADS)
Mick, Jason Richard
In this work GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process is outlined using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure-fluid behavior and binary-mixture behavior to a high degree of accuracy.
Multi-core performance studies of a Monte Carlo neutron transport code
Siegel, A. R.; Smith, K.; Romano, P. K.; Forget, B.; Felker, K. G.
2013-07-14
Performance results are presented for a multi-threaded version of the OpenMC Monte Carlo neutronics code using OpenMP in the context of nuclear reactor criticality calculations. Our main interest is production computing, and thus we limit our approach to threading strategies that both require reasonable levels of development effort and preserve the code features necessary for robust application to real-world reactor problems. Several approaches are developed and the results compared on several multi-core platforms using a popular reactor physics benchmark. A broad range of performance studies are distilled into a simple, consistent picture of the empirical performance characteristics of reactor Monte Carlo algorithms on current multi-core architectures.
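The threading pattern at issue here, per-thread private tallies merged by a final reduction (what an OpenMP `reduction` clause does), can be sketched in Python. The one-region "transport" routine and all names are invented rather than OpenMC's API, and CPython's GIL means this demonstrates the correctness of the pattern, not a speedup:

```python
import random
import threading

# Each "thread" transports its own particle histories into a private
# tally; the privates are summed once at the end, so no locks or atomic
# updates are needed during transport.
N_THREADS, HISTORIES, BINS = 4, 20_000, 10

def transport(rng, tally):
    """Toy one-history transport: score a collision site into a bin."""
    x = min(int(rng.random() * BINS), BINS - 1)
    tally[x] += 1.0

def worker(seed, tally):
    rng = random.Random(seed)          # independent RNG stream per thread
    for _ in range(HISTORIES):
        transport(rng, tally)

private = [[0.0] * BINS for _ in range(N_THREADS)]
threads = [threading.Thread(target=worker, args=(s, private[s]))
           for s in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Reduction step: merge the private tallies into the global result.
global_tally = [sum(p[b] for p in private) for b in range(BINS)]
print(global_tally, sum(global_tally))
```

The trade-off the paper's strategies navigate is exactly this one: private tallies avoid contention but multiply memory use by the thread count, which matters when tallies are large.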
Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing the reliability of very large, highly reliable, fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
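One way to see why variance reduction matters for highly reliable systems with Weibull failure rates is importance sampling of a rare failure event. This toy two-component parallel system is an invented illustration, not MC-HARP's actual method (behavioral decomposition and fault handling are not modeled): lifetimes are drawn from a Weibull with a deliberately smaller scale so failures are common, then reweighted by the likelihood ratio:

```python
import math, random

random.seed(4)

# Two-component parallel system: it fails only if BOTH components fail
# before mission time T. K, SCALE, SCALE_BIASED, and T are invented.
K, SCALE, SCALE_BIASED, T = 1.5, 100.0, 10.0, 10.0

def sample_weibull(scale):
    """Inverse-transform sample of a Weibull(K, scale) lifetime."""
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / K)

def weight(t):
    """Likelihood ratio f_true(t) / f_biased(t) for one sampled lifetime."""
    return ((SCALE_BIASED / SCALE) ** K
            * math.exp((t / SCALE_BIASED) ** K - (t / SCALE) ** K))

n, acc = 100_000, 0.0
for _ in range(n):
    t1, t2 = sample_weibull(SCALE_BIASED), sample_weibull(SCALE_BIASED)
    if t1 < T and t2 < T:                  # system failure under the bias
        acc += weight(t1) * weight(t2)
estimate = acc / n

exact = (1.0 - math.exp(-(T / SCALE) ** K)) ** 2   # F(T)^2 analytically
print(f"importance-sampled estimate {estimate:.3e} vs exact {exact:.3e}")
```

An unbiased (analog) simulation would see this roughly one-in-a-thousand event only about a hundred times in the same sample, with correspondingly poor statistics.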
The Serpent Monte Carlo Code: Status, Development and Applications in 2013
NASA Astrophysics Data System (ADS)
Leppänen, Jaakko; Pusa, Maria; Viitanen, Tuomas; Valtavirta, Ville; Kaltiaisenaho, Toni
2014-06-01
The Serpent Monte Carlo reactor physics burnup calculation code has been developed at VTT Technical Research Centre of Finland since 2004, and is currently used in 100 universities and research organizations around the world. This paper presents the brief history of the project, together with the currently available methods and capabilities and plans for future work. Typical user applications are introduced in the form of a summary review on Serpent-related publications over the past few years.
A User’s Manual for MASH 1.0 - A Monte Carlo Adjoint Shielding Code System
1992-03-01
INTRODUCTION TO MORSE: The Multigroup Oak Ridge Stochastic Experiment code (MORSE) is a multipurpose neutron and gamma-ray transport Monte Carlo code ... in the energy transfer process. Thus, these multigroup cross sections have the same format for both neutrons and gamma rays. In addition, the ... multigroup cross sections in a Monte Carlo code means that the effort required to produce cross-section libraries is reduced. Coupled neutron gamma-ray cross ...
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
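The eigenvalue-calculation topic in such course notes can be illustrated with the simplest possible generation-based k estimate. The cross sections below are invented, and the infinite-medium model ignores geometry entirely; analytically k_inf = nu * Sigma_f / Sigma_a:

```python
import random

random.seed(5)

# Infinite homogeneous medium: every neutron is absorbed, a fraction
# SIGF/SIGA of absorptions cause fission, each fission emits NU neutrons
# on average. All values are illustrative.
SIGA, SIGF, NU = 1.0, 0.45, 2.4

def run_generation(n_start):
    """Transport one generation; return the number of fission neutrons born."""
    born = 0
    for _ in range(n_start):
        if random.random() < SIGF / SIGA:        # absorption ends in fission
            born += int(NU)
            if random.random() < NU - int(NU):   # stochastic rounding of nu
                born += 1
    return born

# Each generation restarts from a fixed population (the usual source
# renormalization); k is tallied as births per starting neutron, and the
# first few generations are skipped as "inactive" cycles.
POP, N_GEN, SKIP = 10_000, 50, 10
k_estimates = [run_generation(POP) / POP for _ in range(N_GEN)]
k_mean = sum(k_estimates[SKIP:]) / (N_GEN - SKIP)
print(f"k_inf estimate {k_mean:.3f} vs analytic {NU * SIGF / SIGA:.3f}")
```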
A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
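The fold the coupling code performs is a sum of forward fluence times adjoint dose importance over the coupling-surface segments and energy groups. The three-segment, four-group numbers below are made up purely to show the bookkeeping:

```python
# Hypothetical 3-segment coupling surface with 4 energy groups.
# fluence: forward result, particles/cm^2 per source particle.
fluence = [
    [2.0e4, 8.0e3, 3.0e3, 5.0e2],
    [1.5e4, 6.0e3, 2.0e3, 4.0e2],
    [9.0e3, 4.0e3, 1.0e3, 2.0e2],
]
# importance: adjoint result, detector response per unit fluence.
importance = [
    [1.0e-9, 4.0e-9, 9.0e-9, 2.0e-8],
    [1.2e-9, 4.5e-9, 1.0e-8, 2.2e-8],
    [0.8e-9, 3.5e-9, 8.0e-9, 1.8e-8],
]

# The fold: dose = sum over segments s and groups g of
#   fluence[s][g] * importance[s][g]
dose = sum(f * i
           for f_row, i_row in zip(fluence, importance)
           for f, i in zip(f_row, i_row))
print(f"folded dose response: {dose:.3e}")
```

Because the two transport calculations are decoupled, the same adjoint importance can be re-folded with fluences for different vehicle orientations or source distances without rerunning the Monte Carlo step.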
Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes
NASA Astrophysics Data System (ADS)
André, T.; Morini, F.; Karamitros, M.; Delorme, R.; Le Loirec, C.; Campos, L.; Champion, C.; Groetz, J.-E.; Fromm, M.; Bordage, M.-C.; Perrot, Y.; Barberet, Ph.; Bernal, M. A.; Brown, J. M. C.; Deleuze, M. S.; Francis, Z.; Ivanchenko, V.; Mascialino, B.; Zacharatou, C.; Bardiès, M.; Incerti, S.
2014-01-01
Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The use of the Kolmogorov-Smirnov test has allowed confirming the statistical compatibility of all simulation results.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code
Zheng, S.H.; Vergnaud, T.; Nimal, J.C.
1998-03-01
Neutron transport calculations need an accurate treatment of cross sections. Two methods (multigroup and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries well adapted to describing neutron interactions in the unresolved resonance energy range. Its advantage is that it properly represents the neutron cross-section fluctuations within a given energy group, allowing correct calculation of the self-shielding effect. This PT cross-section representation is also suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at the Commissariat à l'Énergie Atomique, and several validation calculations are presented. The PT method is shown to be valid not only in the unresolved resonance range but also in all the other energy ranges.
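Sampling from a probability table can be sketched as band selection per neutron flight: each band pairs a probability with a cross-section value, so the sampled mean reproduces the group average while the band-to-band spread carries the self-shielding information. The band probabilities and cross sections below are invented, not from TRIPOLI-3 libraries:

```python
import random

random.seed(6)

# Illustrative probability table for one energy group in the unresolved
# range: (probability, total cross section in barns) per band. Real
# tables are generated from statistically sampled resonance ladders.
bands = [(0.30, 4.0), (0.40, 9.0), (0.20, 20.0), (0.10, 55.0)]

def sample_cross_section():
    """Pick a band by its probability; the flight then uses that sigma."""
    u, cum = random.random(), 0.0
    for prob, sigma in bands:
        cum += prob
        if u < cum:
            return sigma
    return bands[-1][1]   # guard against floating-point round-off

n = 200_000
mean = sum(sample_cross_section() for _ in range(n)) / n
avg = sum(p * s for p, s in bands)   # group-average cross section
print(f"sampled mean {mean:.2f} b vs table average {avg:.2f} b")
```

A single smoothed group-average value would get the mean right but lose the fluctuations, which is exactly the self-shielding error the PT method avoids.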
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
NASA Astrophysics Data System (ADS)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver's accuracy is first verified on test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited from the fluid simulator), the implementation was made efficient and reusable.
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
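The folding step this abstract describes is, at its core, a weighted sum of the coupling-surface fluence against the adjoint dose importance. A sketch with a hypothetical index layout (MASH's actual file formats are far more involved):

```python
def fold_dose(fluence, importance):
    """Fold forward fluence with adjoint dose importance.

    fluence[i][g]:    fluence at surface element i, energy group g
    importance[i][g]: dose importance for the same element and group
    Returns the detector dose response (arbitrary units).
    """
    return sum(f * w
               for f_row, w_row in zip(fluence, importance)
               for f, w in zip(f_row, w_row))
```

Repeating this sum for importance sets computed at different vehicle orientations is how the coupling code produces dose response as a function of orientation without rerunning the transport.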
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
Monte Carlo simulation is a powerful tool for investigating the details of radiation-induced biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. The influence of the inelastic cross sections and of the vibrational excitation reaction on these parameters and on the DNA strand break yields was then studied. Further work on NASIC is underway.
Dosimetric characterization of an 192Ir brachytherapy source with the Monte Carlo code PENELOPE.
Casado, Francisco Javier; García-Pareja, Salvador; Cenizo, Elena; Mateo, Beatriz; Bodineau, Coral; Galán, Pedro
2010-01-01
Monte Carlo calculation is a widespread and well-established practice for determining the dosimetric parameters of brachytherapy sources. In this study, the recommendations of the AAPM TG-43U1 report have been followed to characterize the Varisource VS2000 (192)Ir high-dose-rate source, provided by Varian Oncology Systems. Monte Carlo calculations with the PENELOPE code have been carried out to obtain the dosimetric parameters for this source. The TG-43 formalism parameters are presented, i.e., air kerma strength, dose rate constant, radial dose function and anisotropy function. In addition, a 2D Cartesian-coordinate table of dose rate in water has been calculated. These quantities are compared to the reference data for this source, and the results are in good agreement with them. The data in the present study complement the published data in the following respects: (i) TG-43U1 recommendations are followed with regard to phantom ambient conditions and to uncertainty analysis, including statistical (type A) and systematic (type B) contributions; (ii) the PENELOPE code is benchmarked for this source; (iii) the Monte Carlo calculation methodology differs from that usually published in the way absorbed dose is estimated, leaving out the track-length estimator; (iv) the results of the present work comply with the most recent AAPM and ESTRO physics committee recommendations on Monte Carlo techniques with regard to dose rate uncertainty values and the differences established between our results and the reference data. The results stated in this paper provide a complete parameter collection, which can be used for dosimetric calculations as well as a means of comparison with other datasets for this source.
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally created only at a set of predetermined temperatures. This causes a growing error as one moves further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO on creating problem-dependent, Doppler-broadened cross sections. Currently only the broadening of the 1D cross sections and the probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross-section loading is negligible. Results can be compared with those obtained using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
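The linear-logarithmic interpolation in the square root of temperature mentioned for the probability tables can be sketched as follows; this illustrates the scheme itself, not KENO's internal routine:

```python
import math

def interp_sqrtT(T, T1, xs1, T2, xs2):
    """Interpolate a probability-table cross section between two bounding
    temperatures T1 < T < T2: linear in sqrt(T), logarithmic in the
    cross-section value."""
    f = (math.sqrt(T) - math.sqrt(T1)) / (math.sqrt(T2) - math.sqrt(T1))
    return math.exp((1.0 - f) * math.log(xs1) + f * math.log(xs2))
```

At the bounding temperatures the scheme reproduces the tabulated values exactly, and in between it varies smoothly, which is what makes problem-dependent temperatures workable without storing a dense temperature grid.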
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
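The binary-search substitution mentioned above is the classic form of this speed-up; a sketch of the before/after pair on an invented energy grid (not the ITS source):

```python
from bisect import bisect_right

def find_interval_linear(grid, e):
    """Linear scan: return i such that grid[i] <= e < grid[i+1]."""
    i = 0
    while i < len(grid) - 2 and grid[i + 1] <= e:
        i += 1
    return i

def find_interval_binary(grid, e):
    """Binary search: the same interval index in O(log n) comparisons."""
    return min(max(bisect_right(grid, e) - 1, 0), len(grid) - 2)
```

Because interval lookups sit inside the innermost tracking loop, replacing the linear scan is one of the few single-routine changes that can move whole-code timing by tens of percent.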
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the particle track was produced for each material of interest (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculation and isodose distribution deviated by less than 2% from the MCNPX results. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
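The pre-generated-track idea can be sketched in a few lines. The numbers and the single scaling rule below are invented for illustration; the actual code stores separate track sets per material rather than rescaling one water track:

```python
# One stored track: (step_length_cm, energy_deposited_MeV) pairs,
# recorded in water.  Values are purely illustrative.
water_track = [(0.5, 1.0), (0.5, 1.2), (0.5, 1.6), (0.3, 2.5)]

def replay_track(track, rel_stopping_power):
    """Replay a pre-generated track in another medium by scaling the
    geometric steps with the medium's stopping power relative to water,
    returning (depth_cm, energy_deposit_MeV) pairs."""
    depth, out = 0.0, []
    for step, edep in track:
        depth += step / rel_stopping_power
        out.append((depth, edep))
    return out
```

Replaying stored tracks replaces per-step physics sampling with table lookups, which is where the reported factor-of-200 speed-up over full MCNPX transport comes from.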
A Monte Carlo model for the gardening of the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
The processes of movement and turnover of the lunar regolith are described by a Monte Carlo model. Movement of material by the direct cratering process is the dominant mode, but slumping is also included for angles exceeding the static angle of repose. Using a group of interrelated computer programs, a large number of properties are calculated, including topography, formation of layers, depth of the disturbed layer, nuclear-track distributions, and cosmogenic nuclides. In the most complex program, the history of a 36-point square array is followed for times up to 400 million years. The histories generated are complex and exhibit great variety. Because a crater covers much less area than its ejecta blanket, the height change at a test point tends to show periods of slow accumulation followed by sudden excavation. In general, the agreement with experiment and observation seems good, but two areas of disagreement stand out. First, the calculated surface is rougher than that observed. Second, the observed bombardment ages, of the order of 400 million years, are shorter than expected (by perhaps a factor of 5).
Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
NASA Astrophysics Data System (ADS)
Kum, Oyeon
2004-11-01
Simulations for cancer radiation treatment planning customized to each patient are very useful for both patient and doctor, since they can be used to find the most effective treatment with the least possible dose to the patient. Such a system, the so-called "Doctor by Information Technology", would help provide high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal way to provide an "accurate" dose distribution within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical approach. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. The algorithm solves the main difficulty of parallel MC simulation (overlapping random number series on different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones, and the parallel efficiency approached 100%, as expected.
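The multiple-seed fix for overlapping random-number series can be sketched with per-rank generators. This is a sketch only; the original code seeds its own generator through MPI, not Python's `random`:

```python
import random

def make_streams(n_ranks, master_seed="pmcept-demo"):
    """Return one independently seeded generator per (hypothetical) MPI
    rank, so no two processors reproduce the same random-number series."""
    return [random.Random(f"{master_seed}:{rank}") for rank in range(n_ranks)]
```

Each rank then draws only from its own stream, so histories on different processors are statistically independent while the whole run stays reproducible from the master seed.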
New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O'Brien, M J; Taylor, J M
2007-03-08
The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability-of-initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of in-line visualization capabilities.
Development of a dynamic simulation mode in Serpent 2 Monte Carlo code
Leppaenen, J.
2013-07-01
This paper presents a dynamic neutron transport mode, currently being implemented in the Serpent 2 Monte Carlo code for the purpose of simulating short reactivity transients with temperature feedback. The transport routine is introduced and validated by comparison to MCNP5 calculations. The method is also tested in combination with an internal temperature feedback module, which forms the inner part of a multi-physics coupling scheme in Serpent 2. The demo case for the coupled calculation is a reactivity-initiated accident (RIA) in PWR fuel. (authors)
NASA Astrophysics Data System (ADS)
Chetty, Indrin J.; Moran, Jean M.; Nurushev, Teamor S.; McShan, Daniel L.; Fraass, Benedick A.; Wilderman, Scott J.; Bielajew, Alex F.
2002-06-01
A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for electron beam dose calculations in heterogeneous media. Measurements were made using 10 MeV and 50 MeV minimally scattered, uncollimated electron beams from a racetrack microtron. Source distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber scans and then benchmarked against measurements in a homogeneous water phantom. The in-air spatial distributions were found to have FWHM of 4.7 cm and 1.3 cm, at 100 cm from the source, for the 10 MeV and 50 MeV beams respectively. Energy spectra for the electron beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. Profile measurements were made using an ion chamber in a water phantom with slabs of lung or bone-equivalent materials submerged at various depths. DPM calculations are, on average, within 2% agreement with measurement for all geometries except for the 50 MeV incident on a 6 cm lung-equivalent slab. Measurements using approximately monoenergetic, 50 MeV, 'pencil-beam'-type electrons in heterogeneous media provide conditions for maximum electronic disequilibrium and hence present a stringent test of the code's electron transport physics; the agreement noted between calculation and measurement illustrates that the DPM code is capable of accurate dose calculation even under such conditions.
Development and validation of ALEPH2 Monte Carlo burn-up code
Van Den Eynde, G.; Stankovskiy, A.; Fiorito, L.; Broustaut, M.
2013-07-01
The ALEPH2 Monte Carlo depletion code has two principal features that make it a flexible and powerful tool for reactor analysis. First of all, its comprehensive nuclear data library ensures the consistency between steady-state Monte Carlo and deterministic depletion modules. It covers neutron and proton induced reactions, neutron and proton fission product yields, spontaneous fission product yields, radioactive decay data and total recoverable energies per fission. Secondly, ALEPH2 uses an advanced numerical solver for the first order ordinary differential equations describing the isotope balances, namely a Radau IIA implicit Runge-Kutta method. The versatility of the code allows using it for time behavior simulation of various systems ranging from single pin model to full-scale reactor model. The code is extensively used for the neutronics design of the MYRRHA research fast spectrum facility which will operate in both critical and sub-critical modes. The code has been validated on the decay heat data from JOYO experimental fast reactor. (authors)
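The isotope-balance equations ALEPH2 solves are stiff linear ODEs. As a stand-in for its Radau IIA solver, the sketch below integrates the simplest two-nuclide chain A → B with backward Euler, a deliberately simpler implicit method used here only to show the structure of the problem:

```python
import math

def decay_chain_backward_euler(n0, lam_a, lam_b, dt, steps):
    """Integrate dNa/dt = -lam_a*Na, dNb/dt = lam_a*Na - lam_b*Nb
    with the implicit (backward Euler) update, which remains stable
    however stiff the decay constants are."""
    na, nb = n0
    for _ in range(steps):
        na = na / (1.0 + lam_a * dt)              # implicit step for A
        nb = (nb + dt * lam_a * na) / (1.0 + lam_b * dt)  # then for B
    return na, nb
```

Implicit methods matter here because decay constants in a real depletion system span many orders of magnitude; an explicit integrator would need absurdly small time steps to stay stable. Radau IIA gives the same stability with much higher order accuracy.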
NASA Astrophysics Data System (ADS)
Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.
2015-11-01
In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
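One of the simplest physics-informed reductions of the kind described above uses the fact that a neutron's energy cannot increase in pure downscatter, so the search window shrinks to the part of the grid at or below the pre-collision index. This is a sketch of the idea, not the OpenMC implementation:

```python
from bisect import bisect_right

def downscatter_lookup(grid, i_old, e_new):
    """Locate the interval of e_new searching only grid[:i_old + 2],
    valid whenever the outgoing energy cannot exceed the incoming one
    (i_old is the pre-collision interval index)."""
    return max(bisect_right(grid, e_new, 0, i_old + 2) - 1, 0)
```

Bounding the search range this way is exactly the "reduction in energy vector length" the paper exploits; the further refinements (cross-section structure, expected flux spectrum) shrink the window further.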
NASA Astrophysics Data System (ADS)
Shchurovskaya, M. V.; Alferov, V. P.; Geraskin, N. I.; Radaev, A. I.
2017-01-01
The results of the validation of a research reactor calculation using Monte Carlo and deterministic codes against experimental data, and based on code-to-code comparison, are presented. The continuous-energy Monte Carlo code MCU-PTR and the nodal diffusion-based deterministic code TIGRIS were used for full 3-D calculation of the IRT MEPhI research reactor. The validation included investigations of the reactor with the existing high-enriched uranium (HEU, 90 w/o) fuel and with low-enriched uranium (LEU, 19.7 w/o, U-9%Mo) fuel.
Randeniya, S. D.; Taddei, P. J.; Newhauser, W. D.; Yepes, P.
2010-01-01
Monte Carlo simulations of an ocular treatment beam-line consisting of a nozzle and a water phantom were carried out using MCNPX, GEANT4, and FLUKA to compare the dosimetric accuracy and the simulation efficiency of the codes. Simulated central axis percent depth-dose profiles and cross-field dose profiles were compared with experimentally measured data for the comparison. Simulation speed was evaluated by comparing the number of proton histories simulated per second using each code. The results indicate that all the Monte Carlo transport codes calculate sufficiently accurate proton dose distributions in the eye and that the FLUKA transport code has the highest simulation efficiency. PMID:20865141
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
FILGES, DETLEF
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Response data can also be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
Uncertainties associated with the use of the KENO Monte Carlo criticality codes
Landers, N.F.; Petrie, L.M.
1989-01-01
The KENO multi-group Monte Carlo criticality codes have earned the reputation of being efficient, user friendly tools especially suited for the analysis of situations commonly encountered in the storage and transportation of fissile materials. Throughout their twenty years of service, a continuing effort has been made to maintain and improve these codes to meet the needs of the nuclear criticality safety community. Foremost among these needs is the knowledge of how to utilize the results safely and effectively. Therefore it is important that code users be aware of uncertainties that may affect their results. These uncertainties originate from approximations in the problem data, methods used to process cross sections, and assumptions, limitations and approximations within the criticality computer code itself. 6 refs., 8 figs., 1 tab.
Spread-out Bragg peak and monitor units calculation with the Monte Carlo code MCNPX.
Hérault, J; Iborra, N; Serrano, B; Chauvel, P
2007-02-01
The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the proton therapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out using the proton beam line of the MEDICYC isochronous cyclotron installed at the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be checked by an independent semi-empirical calculation.
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium-target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
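Of the three techniques, the area-ratio (Sjöstrand) method is the most compact to state: the reactivity in dollars is minus the ratio of the prompt area to the delayed area of the pulsed-source detector response. A sketch with invented count data (real analyses also handle background and dead time):

```python
def area_ratio_reactivity(times, counts, t_prompt_end):
    """Sjostrand area-ratio method: rho($) = -(prompt area)/(delayed area).
    times/counts sample the detector response on a uniform time grid;
    t_prompt_end separates the prompt decay from the delayed plateau."""
    dt = times[1] - times[0]
    prompt = dt * sum(c for t, c in zip(times, counts) if t < t_prompt_end)
    delayed = dt * sum(c for t, c in zip(times, counts) if t >= t_prompt_end)
    return -prompt / delayed
```

Because the result comes out directly in dollars, it can be compared between the deterministic and Monte Carlo simulations without an independent estimate of the effective delayed-neutron fraction.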
Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code
Trumbull, T. H.
2013-07-01
With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition, and the subsequent heating, has increased. However, the availability and quality of the data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies, while also allowing calculations with different levels of fidelity to be performed. Detailed discussions of the usage of the various components of the energy release data are provided to demonstrate novel methods of borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor results from both neutron and photon interactions with materials, a discussion of the photon energy deposition data processing is also provided. (authors)
Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.
Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W
1998-05-01
The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
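A core ingredient of such swarm Monte Carlo simulations is sampling the electron's free-flight time between collisions. The null-collision form below, using a constant trial collision frequency, is the standard established technique; attributing this exact scheme to the code described here is an assumption:

```python
import math
import random

def free_flight_time(nu_max, rng=random.random):
    """Sample the time to the next trial collision from an exponential
    distribution with constant (maximum) collision frequency nu_max;
    'null' collisions that change nothing absorb the difference between
    nu_max and the true, energy-dependent collision frequency."""
    return -math.log(1.0 - rng()) / nu_max
```

The constant trial frequency keeps the sampling exact even though the real collision frequency changes continuously as the electron accelerates in the field.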
Gas bremsstrahlung studies for medium energy electron storage rings using FLUKA Monte Carlo code
NASA Astrophysics Data System (ADS)
Sahani, Prasanta Kumar; Haridas, G.; Sinha, Anil K.; Hannurkar, P. R.
2016-02-01
Gas bremsstrahlung is generated by the interaction of the stored electron beam with residual gas molecules in the vacuum chamber of a storage ring. As the opening angle of the bremsstrahlung is very small, the scoring area used in Monte Carlo simulation plays a dominant role in evaluating the absorbed dose. In the present work, the gas bremsstrahlung angular distribution and absorbed dose are studied for electron storage rings with energies ranging from 1 to 5 GeV using the Monte Carlo code FLUKA. From the study, an empirical formula for gas bremsstrahlung dose estimation was deduced. The simulation results are found to be in very good agreement with reported experimental data. The results are applied in estimating the gas bremsstrahlung dose for the 2.5 GeV synchrotron radiation source Indus-2 at Raja Ramanna Centre for Advanced Technology, India. The paper discusses the details of the simulation and the results obtained.
Implementation of the direct S(α,β) method in the KENO Monte Carlo code
Hart, Shane W. D.; Maldonado, G. Ivan
2016-11-25
The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived from the ENDF data files directly instead of converting them to double differential cross sections. This allows the size of the cross section data on disk to be reduced substantially. KENO has also been updated to allow interpolation in temperature on these data, so that problems can be run at any temperature. Results are shown for several simplified problems for a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run-time of the Monte Carlo calculation, but memory usage is decreased substantially.
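The paper's exact interpolation scheme is not given here; one standard way continuous-energy Monte Carlo codes interpolate tabulated thermal data in temperature is stochastic mixing between the two bracketing temperature tables, sketched below (the dictionary layout of the tables is an illustrative assumption, not KENO's format):

```python
import random

def sample_sab(T, tables, rng):
    """Stochastic interpolation in temperature: choose one of the two
    tabulated temperatures bracketing T with probability proportional
    to proximity, then sample an (alpha, beta) pair from that table.
    Assumes T lies within the tabulated temperature range."""
    temps = sorted(tables)
    lo = max(t for t in temps if t <= T)
    hi = min(t for t in temps if t >= T)
    if lo != hi and rng.random() < (T - lo) / (hi - lo):
        lo = hi                      # pick the upper bracketing table
    return rng.choice(tables[lo])
```

Averaged over many histories this reproduces linear interpolation of the distributions without ever constructing an intermediate table in memory.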
Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C.; Kagadis, George C.
2012-08-15
Purpose: GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and, more recently, CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Methods: Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to radiotherapy and dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta emitting isotopes, e.g., ¹⁷⁷Lu, ⁹⁰Y, and ³²P, and (c) gamma emitting isotopes, e.g., ¹¹¹In, ¹³¹I, ¹²⁵I, and ⁹⁹ᵐTc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed against already published studies for different Monte Carlo codes, namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. Results: The GATE comparison provided reliable results in all cases (monoenergetic electrons, beta emitting isotopes, and photon emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. The produced DPKs are in very good agreement with the already published data
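The shell-scoring scheme described above (point isotropic source, dose tallied in concentric spherical shells) can be sketched as follows; the event format `(radius, deposited energy)` and the function name are assumptions for illustration:

```python
import math

def dose_point_kernel(events, dr, n_shells, density=1.0):
    """Bin energy-deposition events (radius, energy) from an isotropic
    point source into concentric spherical shells of width dr, then
    divide by shell mass to obtain the absorbed dose per shell."""
    edep = [0.0] * n_shells
    for r, e in events:
        k = int(r / dr)
        if k < n_shells:
            edep[k] += e
    # mass of shell k: density * (4/3)*pi*(r_outer^3 - r_inner^3)
    shell_mass = lambda k: density * 4.0 * math.pi / 3.0 * (
        ((k + 1) * dr) ** 3 - (k * dr) ** 3)
    return [edep[k] / shell_mass(k) for k in range(n_shells)]
```

Normalizing each shell dose by the number of decays simulated then yields the DPK in dose per decay.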
Generation of XS library for the reflector of VVER reactor core using Monte Carlo code Serpent
NASA Astrophysics Data System (ADS)
Usheva, K. I.; Kuten, S. A.; Khruschinsky, A. A.; Babichev, L. F.
2017-01-01
A physical model of the radial and axial reflectors of a VVER-1200-like reactor core has been developed. Five types of radial reflector with different material compositions exist for the VVER reactor core, and 1D and 2D models were developed for all of them. The axial top and bottom reflectors are described by the 1D model. A two-group XS library for the diffusion code DYN3D has been generated for all types of reflectors using the Serpent 2 Monte Carlo code. The power distribution in the reactor core calculated in DYN3D is flattened in the core central region to a greater extent with the 2D model of the radial reflector than with its 1D model.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
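As an illustration of what a k-eigenvalue estimate means, here is a deliberately simplified analog toy model (an infinite absorbing medium, not MC++'s actual algorithm), for which k = nu * p_fission analytically:

```python
import random

def k_infinite_medium(p_fission, nu, n_histories, n_batches, rng):
    """Toy analog Monte Carlo k-estimate for an infinite medium: every
    source neutron is absorbed; with probability p_fission it causes
    fission, emitting nu secondaries (sampled to an integer). The batch
    estimate is neutrons produced / neutrons started."""
    batch_k = []
    for _ in range(n_batches):
        produced = 0
        for _ in range(n_histories):
            if rng.random() < p_fission:
                n_int = int(nu)
                # sample the fractional part of nu stochastically
                produced += n_int + (1 if rng.random() < nu - n_int else 0)
        batch_k.append(produced / n_histories)
    return sum(batch_k) / n_batches
```

A production code additionally banks the fission sites of one batch as the source of the next, so the spatial source distribution converges along with k.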
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Sutton, T.M.; Brown, F.B.
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
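Once the tables are built, sampling a cross section from them is straightforward; a minimal sketch assuming a table stored as cumulative band probabilities plus band-average cross sections (the dictionary layout is illustrative, not RACER's internal format):

```python
import bisect
import random

def sample_urr_total_xs(table, rng):
    """Sample a total cross section in the unresolved resonance range
    from a probability table: 'cum' holds cumulative band probabilities
    (last entry 1.0) and 'xs' the band-average cross sections."""
    band = bisect.bisect_left(table["cum"], rng.random())
    return table["xs"][band]
```

Because the sampled cross section fluctuates from history to history, this reproduces resonance self-shielding effects that a single dilute-average cross section cannot.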
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: Present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked, and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835-846). Phase space data generated for Varian linac photon beams (6-15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute terms (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for lateral dose profiles). See Fig. 1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and a lung case with eight fields) are carried out and the outcomes are compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs. 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3 mm DTA criterion) is observed (Fig. 4), but dose in lung can be over-estimated by up to 10% by AAA (Fig. 5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig. 6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
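The 91% passing rate quoted above refers to the gamma-index test; a minimal 1-D global-gamma sketch of that metric (simplified to discrete points, without interpolation of the evaluated distribution):

```python
import math

def gamma_1d(ref, evl, dx, dose_tol, dist_tol):
    """1-D gamma index (e.g. 3%/3 mm): for each reference point, take
    the minimum over evaluated points of
    sqrt((dose difference/dose_tol)^2 + (distance/dist_tol)^2).
    A reference point passes the test when its gamma <= 1."""
    out = []
    for i, d_ref in enumerate(ref):
        g = min(
            math.hypot((d_evl - d_ref) / dose_tol, (j - i) * dx / dist_tol)
            for j, d_evl in enumerate(evl)
        )
        out.append(g)
    return out
```

The passing rate is then simply the fraction of reference points with gamma at or below 1.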
Applications of the Monte Carlo Code Geant to Particle Beam Therapy
NASA Astrophysics Data System (ADS)
Szymanowski, H.; Fuchs, T.; Nill, S.; Wilkens, J. J.; Pflugfelder, D.; Oelfke, U.; Glinec, Y.; Faure, J.; Malka, V.
2006-04-01
We report on the use of the Monte Carlo simulation code GEANT for two different applications in the field of particle beam therapy. The first application relates to the planning of intensity-modulated proton therapy (IMPT) treatments. An important issue is thereby the accurate prediction of the dose while irradiating complex inhomogeneous patient geometries. We developed an improved method to account for tissue inhomogeneities in pencil beam algorithms. We show that GEANT3 can be successfully used to validate the new model before its integration in our treatment planning system. Another project concerns the investigation of the potential of high-energy particles produced by laser-plasma interactions for radiotherapy. GEANT4 simulations of the dosimetric properties of an experimental laser-accelerated electron beam were performed. They show that this technique may be very attractive for the development of new therapy beam modalities such as very-high energy (170 MeV) electrons.
Space applications of the MITS electron-photon Monte Carlo transport code system
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model which has two choices of boundary settings, free boundary condition and Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of highly charged ion state outputs under free boundary condition and similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
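The abstract does not state which dead-time model was fitted to the Mosaic electronics; the paralyzable (extendable) model is one standard choice for such descriptions, sketched here as an assumption:

```python
import math

def paralyzable_observed_rate(true_rate, tau):
    """Paralyzable (extendable) dead-time model: each event extends the
    dead period by tau, so the observed rate is m = n * exp(-n * tau),
    which rolls over and decreases at very high true rates n."""
    return true_rate * math.exp(-true_rate * tau)
```

The alternative non-paralyzable model, m = n / (1 + n*tau), saturates instead of rolling over; count-rate curves like those measured for the phantoms distinguish the two.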
Giuseppe Palmiotti
2015-05-01
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
ITS Version 6: The Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator
Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein
2014-01-01
Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of the GEANT4 application for tomographic emission (GATE) 6.1 were compared against Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon Linac. To do this, similar geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in the PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution calculated by the Standard and Livermore EM models and MCNPX. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27% and 0.29% for PDDs of a 10 cm × 10 cm field size for MCNPX and the Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross section and stopping power data, and to the physics processes of MCNPX and the three EM models not being simulated identically. For example, in the Standard model the photoelectron direction was sampled from the Gavrila-Sauter distribution, whereas the photoelectron moved in the same direction as the incident photon in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804
Verification of SMART Neutronics Design Methodology by the MCNAP Monte Carlo Code
Jong Sung Chung; Kyung Jin Shim; Chang Hyo Kim; Chungchan Lee; Sung Quun Zee
2000-11-12
SMART is a small advanced integral pressurized water reactor (PWR) of 330 MW(thermal) designed for both electricity generation and seawater desalinization. The CASMO-3/MASTER nuclear analysis system, a design basis of Korean PWR plants, has been employed for the SMART core nuclear design and analysis because the fuel assembly (FA) characteristics and reactor operating conditions in temperature and pressure are similar to those of PWR plants. However, the SMART FAs are highly poisoned, with more than 20 Al₂O₃-B₄C burnable poison rods (BPRs) plus additional Gd₂O₃/UO₂ BPRs in each FA. The reactor is operated with control rods inserted. Therefore, the flux and power distribution may become more distorted than those of commercial PWR plants. In addition, SMART should produce power from room temperature to the hot-power operating condition because it employs nuclear heating from room temperature. This demands reliable predictions of core criticality, shutdown margin, control rod worth, power distributions, and reactivity coefficients at both room temperature and the hot operating condition, yet no such data are available to verify the CASMO-3/MASTER (hereafter MASTER) code system. In the absence of experimental verification data for the SMART neutronics design, the Monte Carlo depletion analysis program MCNAP is adopted as a near-term alternative for qualifying MASTER neutronics design calculations. MCNAP is a personal computer-based continuous energy Monte Carlo neutronics analysis program written in the C++ language. We established its qualification by presenting its prediction accuracy on measurements of the Venus critical facilities, core neutronics analysis of a PWR plant in operation, and depletion characteristics of integral burnable absorber FAs of the current PWR. Here, we present a comparison of MASTER and MCNAP neutronics design calculations for SMART and establish the qualification of the MASTER system.
MCNP-PoliMi: a Monte-Carlo code for correlation measurements
NASA Astrophysics Data System (ADS)
Pozzi, Sara A.; Padovani, Enrico; Marseguerra, Marzio
2003-11-01
The Monte-Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle interactions in each history be described as closely as possible. The MCNP-PoliMi code has been developed from the standard MCNP code to simulate each neutron-nucleus interaction as closely as possible. In particular, neutron interactions and photon production are correlated, and correct neutron and photon fission multiplicities have been implemented. The code output consists of relevant information about each collision, for example the type of collision, the collision target, the energy deposited, and the position of the interaction. A post-processing code has also been developed and can be tailored to model specific detector characteristics. These features make MCNP-PoliMi a versatile tool to simulate particle interactions and detection processes. The application of the MCNP-PoliMi code to simulate neutron and gamma-ray detection in a plastic scintillator is presented. PoliMi stands for Politecnico di Milano.
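A post-processor of the kind described, turning the per-collision output into detector pulses, might look like the following hypothetical sketch (collisions within a fixed pile-up window merged into one pulse, light output simply summed; the record format is an assumption):

```python
def build_pulses(collisions, window):
    """Post-process time-ordered (time, light) collision records into
    detector pulses: collisions arriving within 'window' of the current
    pulse's start time pile up into one pulse with summed light output."""
    pulses = []
    for t, light in sorted(collisions):
        if pulses and t - pulses[-1][0] <= window:
            pulses[-1][1] += light            # pile-up: add to current pulse
        else:
            pulses.append([t, light])         # start a new pulse
    return [(t, light) for t, light in pulses]
```

A realistic detector model would additionally convert deposited energy to light nonlinearly per collision target and apply a detection threshold to the summed pulse.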
Cullen, D.E
2000-11-22
TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input Preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, three-dimensional, combinatorial-geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann
2011-07-01
There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
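The layer-discrimination step described above reduces, for each photon position, to a comparison against the analytic interface expression; a minimal sketch, where `ripple` is a hypothetical stand-in for the fitted EVPOME interface:

```python
import math

def tissue_layer(x, y, z, interface):
    """Classify a photon position relative to an analytically defined
    interface z = interface(x, y): layer 1 above the surface mesh,
    layer 2 below it."""
    return 1 if z > interface(x, y) else 2

# hypothetical undulating boundary standing in for the fitted interface
ripple = lambda x, y: 0.1 * math.sin(2.0 * math.pi * x) * math.cos(2.0 * math.pi * y)
```

During the random walk, each fluorescence emission can then be tagged with the layer it originated from, which is what allows per-layer discrimination of detected photons.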
X-ray simulation with the Monte Carlo code PENELOPE. Application to Quality Control.
Pozuelo, F; Gallardo, S; Querol, A; Verdú, G; Ródenas, J
2012-01-01
A realistic knowledge of the energy spectrum is very important in Quality Control (QC) of X-ray tubes in order to reduce dose to patients. However, owing to the difficulty of measuring the X-ray spectrum accurately, it is not normally obtained in routine QC; instead, some parameters are measured and/or calculated. The PENELOPE and MCNP5 codes, based on the Monte Carlo method, can be used as complementary tools to verify parameters measured in QC. These codes allow estimation of the bremsstrahlung spectrum and characteristic lines from the anode, taking into account the specific characteristics of the equipment, and they have been applied to simulate an X-ray spectrum. Results are compared with the theoretical IPEM 78 spectrum. A sensitivity analysis has been performed to estimate the influence on simulated spectra of important parameters used in the simulation codes. This analysis shows that the FORCE factor is the most important parameter in PENELOPE simulations. The FORCE factor, a variance-reduction parameter, improves the simulation but produces sharp increases in computing time. The value of FORCE should therefore be optimized so that good agreement between simulated and theoretical spectra is reached while keeping the computing time acceptable. Quality parameters such as the Half Value Layer (HVL) can be obtained with the PENELOPE model developed, but FORCE must take such a high value that computing time increases considerably. On the other hand, depth-dose assessment can be achieved with acceptable results for small values of FORCE.
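The interaction-forcing idea behind a parameter like FORCE can be illustrated with a toy slab-interaction tally. This is a generic sketch of the weight bookkeeping, not PENELOPE's actual implementation: forcing every history to interact trades extra per-history work for lower variance.

```python
import math
import random

def analog_tally(mu, L, n_histories, rng):
    # analog MC: each photon either interacts within the slab (score 1) or not
    hits = sum(1 for _ in range(n_histories)
               if rng.random() < 1.0 - math.exp(-mu * L))
    return hits / n_histories

def forced_tally(mu, L, n_histories):
    # interaction forcing: every history is forced to interact in the slab
    # and carries the statistical weight p = 1 - exp(-mu*L); for this trivial
    # tally the forced estimator has zero variance
    p = 1.0 - math.exp(-mu * L)
    return sum(p for _ in range(n_histories)) / n_histories

p_true = 1.0 - math.exp(-0.2 * 2.0)
print(abs(analog_tally(0.2, 2.0, 100000, random.Random(1)) - p_true) < 0.01)  # True
print(abs(forced_tally(0.2, 2.0, 10) - p_true) < 1e-12)                       # True
```

In a real code the forced interaction site is also sampled (from the truncated exponential on the slab), and the weight multiplies all subsequent scores.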
SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy
Rinaldi, I
2014-06-01
Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code demands accurate and reliable physical models for the transport and interaction of all components of the mixed radiation field. This contribution presents an overview of recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general-purpose MC code which allows the calculation of particle transport and interactions with matter, covering an extended range of applications. The user can manage the code through a graphical interface (FLAIR) developed using the Python programming language. Results: This contribution presents recent refinements in the description of ionization processes and comparisons between FLUKA results and experimental data from ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging applications in treatment monitoring are shown. The complex calculation of prompt gamma-ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton-induced nuclear interactions also provide reliable cross-section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed-tomography images in order to build patient geometries automatically and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as the reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications. Parts of this work have been supported by the European
Assessment of MIRD data for internal dosimetry using the GATE Monte Carlo code.
Parach, Ali Asghar; Rajabi, Hossein; Askari, Mohammad Ali
2011-08-01
GATE/GEANT is a Monte Carlo code dedicated to nuclear medicine that allows calculation of the dose to organs of voxel phantoms. On the other hand, MIRD is a well-developed system for estimation of the dose to human organs. In this study, results obtained from GATE/GEANT using the Snyder phantom are compared to published MIRD data. For this, the mathematical Snyder phantom was discretized and converted to a digital phantom of 100 × 200 × 360 voxels. The activity was considered uniformly distributed within kidneys, liver, lungs, pancreas, spleen, and adrenals. The GATE/GEANT Monte Carlo code was used to calculate the dose to the organs of the phantom from mono-energetic photons of 10, 15, 20, 30, 50, 100, 200, 500, and 1000 keV. The dose was converted into specific absorbed fraction (SAF) and the results were compared to the corresponding published MIRD data. On average, there was a good correlation (r² > 0.99) between the two series of data. However, the GATE/GEANT data were on average 0.16 ± 6.22% lower than the corresponding MIRD data for self-absorption. Self-absorption in the lungs was considerably higher in the MIRD data than in the GATE/GEANT data for photon energies of 10-20 keV. As for cross-irradiation to other organs, the GATE/GEANT data were on average 1.5 ± 8.1% higher than the MIRD data for photon energies of 50-1000 keV. For photon energies of 10-30 keV, the relative difference was +7.5 ± 67%. The agreement between the GATE/GEANT and the MIRD data thus depended upon the absolute SAF values and the photon energy. For 10-30 keV photons, where the absolute SAF values were small, the uncertainty was high and the effect of the cross sections prominent, and there was no agreement between the GATE/GEANT results and the MIRD data. However, for photons of 50-1000 keV, the bias was negligible and the agreement was acceptable.
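For reference, the specific absorbed fraction compared in this record is, in the MIRD formalism, the absorbed fraction divided by the target mass. A minimal sketch of the conversion from a Monte Carlo energy-deposition tally (function name and numbers are illustrative):

```python
def specific_absorbed_fraction(e_absorbed, e_emitted, target_mass_kg):
    """SAF (kg^-1) = (energy absorbed in target / energy emitted by source)
    divided by the target mass; energies in the same units."""
    return (e_absorbed / e_emitted) / target_mass_kg

# e.g. 5% of the emitted photon energy absorbed in a 0.31 kg organ:
print(round(specific_absorbed_fraction(0.05, 1.0, 0.31), 4))  # 0.1613
```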
Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.
VALDEZ, GREG D.
2012-11-30
Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Code System for Monte Carlo Simulation of Electron and Photon Transport.
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
Sinha, A.; Patni, H.K.; Dixit, B.M.; Painuly, N.K.; Singh, N.
2016-01-01
Background: Most preclinical studies are carried out on mice. For internal dose assessment of a mouse, specific absorbed fraction (SAF) values play an important role. In most studies, SAF values are estimated using older standard human organ compositions and values for a limited set of source-target pairs. Objective: SAF values for monoenergetic photons of energies 15, 50, 100, 500, 1000 and 4000 keV were evaluated for the Digimouse voxel phantom incorporated in the Monte Carlo code FLUKA. The source organs considered in this study were lungs, skeleton, heart, bladder, testis, stomach, spleen, pancreas, liver, kidney, adrenal, eye and brain. The target organs considered were lungs, skeleton, heart, bladder, testis, stomach, spleen, pancreas, liver, kidney, adrenal and brain. The eye was considered as a target organ only for the eye as a source organ. Organ compositions and densities were adopted from International Commission on Radiological Protection (ICRP) Publication 110. Results: Evaluated organ masses and SAF values are presented in tabular form. It is observed that SAF values decrease with increasing source-to-target distance. The SAF value for self-irradiation decreases with increasing photon energy. The SAF values are also found to depend on the mass of the target, with higher values obtained for lower masses. The effect of composition is highest for the lungs as target organ, where the mass and the estimated SAF values show the largest differences. Conclusion: These SAF values are very important for absorbed dose calculation for various organs of a mouse. PMID:28144589
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
A Monte Carlo Code to Compute Energy Fluxes in Cometary Nuclei
NASA Astrophysics Data System (ADS)
Moreno, F.; Muñoz, O.; López-Moreno, J. J.; Molina, A.; Ortiz, J. L.
2002-04-01
A Monte Carlo model designed to compute both the input and output radiation fields from spherical-shell cometary atmospheres has been developed. The code is an improved version of that by H. Salo (1988, Icarus 76, 253-269); it includes the computation of the full Stokes vector and can compute both the input fluxes impinging on the nucleus surface and the output radiation. This will have specific applications for the near-nucleus photometry, polarimetry, and imaging data collection planned in the near future from space probes. After carrying out some validation tests of the code, we consider here the effects of including the full 4×4 scattering matrix in the calculations of the radiative flux impinging on cometary nuclei. As input to the code we used realistic phase matrices derived by fitting the observed behavior of the linear polarization as a function of phase angle. The observed single scattering linear polarization phase curves of comets are fairly well represented by a mixture of magnesium-rich olivine particles and small carbonaceous particles. The input matrix of the code is thus given by the phase matrix for olivine as obtained in the laboratory plus a variable scattering fraction phase matrix for absorbing carbonaceous particles. These fractions are 3.5% for Comet Halley and 6% for Comet Hale-Bopp, the comet with the highest percentage of all those observed. The errors in the total input flux impinging on the nucleus surface caused by neglecting polarization are found to be within 10% for the full range of solar zenith angles. Additional tests on the resulting linear polarization of the light emerging from cometary nuclei in near-nucleus observation conditions at a variety of coma optical thicknesses show that the polarization phase curves do not experience any significant changes for optical thicknesses τ≳0.25 and Halley-like surface albedo, except near 90° phase angle.
A Parallel Monte Carlo Code for Simulating Collisional N-body Systems
NASA Astrophysics Data System (ADS)
Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.
2013-02-01
We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ~ 10⁷ particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10⁵ to 10⁷. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10⁵, 128 for N = 10⁶ and 256 for N = 10⁷. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.
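The radial sort mentioned in this abstract is central to the Hénon method: stars are ordered by radius and each interacts with its nearest radial neighbour. A schematic of that sort-then-pair step (serial, with illustrative data; not the actual parallel implementation):

```python
def pair_neighbors(radii):
    """Henon-style step: sort stars by radius, then pair radially adjacent
    stars for two-body relaxation encounters. Returns index pairs; with an
    odd number of stars, the outermost one is left unpaired."""
    order = sorted(range(len(radii)), key=lambda i: radii[i])
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]

radii = [0.9, 0.1, 0.5, 2.0, 1.4]
print(pair_neighbors(radii))  # [(1, 2), (0, 4)]
```

The parallel version described in the paper distributes this sort across processors, which is why the sorting algorithm sets the scaling limit.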
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Lone, M. A.; Wong, P. Y.; Costen, Robert C.
1994-01-01
A baryon transport code (BRYNTRN) has previously been verified using available Monte Carlo results for a solar-flare spectrum as the reference. Excellent results were obtained, but the comparisons were limited to the available data on dose and dose equivalent for moderate penetration studies that involve minor contributions from secondary neutrons. To further verify the code, the secondary energy spectra of protons and neutrons are calculated using BRYNTRN and LAHET (Los Alamos High-Energy Transport code, which is a Monte Carlo code). These calculations are compared for three locations within a water slab exposed to the February 1956 solar-proton spectrum. Reasonable agreement was obtained when various considerations related to the calculational techniques and their limitations were taken into account. Although the Monte Carlo results are preliminary, it appears that the neutron albedo, which is not currently treated in BRYNTRN, might be a cause for the large discrepancy seen at small penetration depths. It also appears that the nonelastic neutron production cross sections in BRYNTRN may underestimate the number of neutrons produced in proton collisions with energies below 200 MeV. The notion that the poor energy resolution in BRYNTRN may cause a large truncation error in neutron elastic scattering requires further study.
NASA Astrophysics Data System (ADS)
KolláR, D.; Michel, R.; Masarik, J.
2006-03-01
A purely physical model based on a Monte Carlo simulation of galactic cosmic ray (GCR) particle interaction with meteoroids is used to investigate neutron interactions down to thermal energies. Experimental and/or evaluated excitation functions are used to calculate neutron capture production rates as a function of the size of the meteoroid and the depth below its surface. Presented are the depth profiles of cosmogenic radionuclides 36Cl, 41Ca, 60Co, 59Ni, and 129I for meteoroid radii from 10 cm up to 500 cm and a 2π irradiation. Effects of bulk chemical composition on n-capture processes are studied and discussed for various chondritic and lunar compositions. The mean GCR particle flux over the last 300 ka was determined from the comparison of simulations with measured 41Ca activities in the Apollo 15 drill core. The determined value significantly differs from that obtained using equivalent models of spallation residue production.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Valentine, T.E.; Rugama, Y.; Munoz-Cobos, J.; Perez, R.
2000-10-23
The design of reactivity monitoring systems for accelerator-driven systems must be investigated to ensure that such systems remain subcritical during operation. The Monte Carlo codes LAHET and MCNP-DSP were combined together to facilitate the design of reactivity monitoring systems. The coupling of LAHET and MCNP-DSP provides a tool that can be used to simulate a variety of subcritical measurements such as the pulsed neutron, Rossi-{alpha}, or noise analysis measurements.
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
Henyey-Greenstein scattering indicatrix subroutine; calculation of the spectral (group) phase function ... calculations; (b) computing code SRT-RTMC-NSM, intended for narrow-band Spectral Radiation Transfer Ray-Tracing simulation by the Monte Carlo method with ... ; computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer
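The Henyey-Greenstein indicatrix named in this record has a standard closed-form inverse-CDF sampling rule for the scattering cosine. The sketch below is generic, not code from the SRT-RTMC-NSM package:

```python
import random

def sample_hg_mu(g, u):
    """Sample mu = cos(theta) from the Henyey-Greenstein phase function
    with asymmetry parameter g, given a uniform random number u in [0,1)."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

# sanity checks: u=0 gives backscatter, u=1 gives forward scatter,
# and the sample mean of mu converges to g
rng = random.Random(3)
mus = [sample_hg_mu(0.85, rng.random()) for _ in range(200000)]
print(abs(sum(mus) / len(mus) - 0.85) < 0.01)  # True
```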
Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.
Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario
2016-11-24
Radiation protection (RP) in the use of medical cyclotrons involves many aspects both in the routine use and for the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of physical and transport parameters to be used at the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility.
Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J
2004-09-01
We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo-random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10⁸ or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10⁸ histories. For a smaller number of histories (1×10⁸) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10⁸ histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
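The speedup and efficiency figures quoted in this record follow the standard definitions; a minimal sketch, with hypothetical timings chosen to reproduce an 83.9% efficiency on 24 processors:

```python
def speedup(t_serial, t_parallel):
    # how many times faster the parallel run is than the serial run
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_proc):
    # fraction of ideal linear speedup actually achieved
    return speedup(t_serial, t_parallel) / n_proc

# hypothetical: 1000 s serial run completing in 49.66 s on 24 processors
print(round(100 * efficiency(1000.0, 49.66, 24), 1))  # 83.9
```

Efficiency drops when per-processor compute time shrinks relative to communication time, which is the effect the abstract describes for the small-history runs.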
Chiavassa, S; Aubineau-Lanièce, I; Bitar, A; Lisbona, A; Barbet, J; Franck, D; Jourdain, J R; Bardiès, M
2006-02-07
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
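The folding step described in this abstract — combining the forward coupling-surface fluence with the adjoint dose importance to get the dose response — reduces to an inner product over the discretized (energy, angle, position) bins. A minimal sketch with an illustrative three-group structure (numbers are made up, not MASH data):

```python
def fold(fluence, importance):
    """Dose response = sum over coupling-surface bins of
    forward fluence x adjoint dose importance."""
    return sum(f * i for f, i in zip(fluence, importance))

# three illustrative energy groups on the coupling surface:
surface_fluence  = [2.0, 1.0, 4.0]   # forward calculation output
dose_importance  = [1.0, 3.0, 2.0]   # adjoint calculation output
print(fold(surface_fluence, dose_importance))  # 13.0
```

Because the adjoint importance is computed once per detector/geometry, the same fold can be re-evaluated cheaply for many source orientations and distances, which is the efficiency argument the abstract makes.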
Off-axis neutron study from a uniform scanning proton beam using Monte Carlo code FLUKA
NASA Astrophysics Data System (ADS)
Islam, Mohammad Rafiqul
The production of secondary neutrons is an undesirable byproduct of proton therapy. It is important to quantify the contribution from secondary neutrons to the patient dose received outside the treatment volume. The purpose of this study is to investigate the off-axis dose equivalent from secondary neutrons using the Monte Carlo radiation transport code FLUKA. The study is done using a simplified version of the beam delivery system used at ProCure Proton Therapy Center, Oklahoma City, OK. A particular set of treatment parameters was used to study the dose equivalent outside the treatment volume, inside a phantom and in air, at various depths and angles with respect to the primary beam axis. Three proton beams with maximum energies of 78 MeV, 162 MeV and 226 MeV, a 4 cm modulation width, a 5 cm diameter brass aperture, and a small snout located 38 cm from isocenter were used for the study. The FLUKA-calculated ratio of secondary neutron dose equivalent to absorbed proton dose, Hn/Dp, decreased with distance from beam isocenter. The Hn/Dp ranged from 0.11 +/- 0.01 mSv/Gy for the 78 MeV proton beam to 111.01 +/- 1.99 mSv/Gy for the 226 MeV proton beam. Overall, Hn/Dp was observed to be higher in air than in the phantom, indicating the predominance of external neutrons produced in the nozzle rather than inside the body.
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
PADOVANI, ENRICO
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emissions and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC package BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
NASA Astrophysics Data System (ADS)
SU, J.; Sagdeev, R.; Usikov, D.; Chin, G.; Boyer, L.; Livengood, T. A.; McClanahan, T. P.; Murray, J.; Starr, R. D.
2013-12-01
Introduction: The leakage flux of lunar neutrons produced by precipitation of galactic cosmic ray (GCR) particles in the upper layer of the lunar regolith and measured by orbital instruments such as the Lunar Exploration Neutron Detector (LEND) is investigated by Monte Carlo simulation. Previous Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the elemental composition of lunar soil [1-6] and its effect on the leakage neutron flux. We investigate effects on the emergent flux that depend on the physical distribution of hydrogen within the regolith. We use the software package GEANT4 [7] to calculate neutron production from spallation by GCR particles [8,9] in the lunar soil. Multiple layers of differing hydrogen/water content at different depths in the lunar regolith model are introduced to examine enhancement or suppression of the leakage neutron flux. We find that the majority of leakage thermal and epithermal neutrons are produced at depths of 25 cm to 75 cm below the lunar surface. Neutrons produced in the shallow top layer retain more of their original energy due to fewer scattering interactions and escape from the lunar surface mostly as fast neutrons. This provides a diagnostic tool for interpreting leakage neutron flux enhancement or suppression due to the hydrogen concentration distribution in lunar regolith. We also find that the emission angular distribution of thermal and epithermal leakage neutrons can be described by cos^{3/2}(theta), whereas that of fast neutrons follows cos(theta). The energy sensitivity and angular response of the LEND detectors SETN and CSETN are investigated using the leakage neutron spectrum from GEANT4 simulations. A simplified LRO model is used to benchmark MCNPX [10] and GEANT4 on the CSETN absolute count rate corresponding to the neutron flux from bombardment of FAN lunar soil by GCR particles at a 120 MV solar modulation potential. We are able to interpret the count rates of SETN and
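The quoted emission laws can be sampled directly. A minimal sketch, assuming the flux per unit solid angle varies as cos^k(theta) so the polar-angle density picks up a sin(theta) factor, which gives an analytic inverse CDF (illustrative only, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_emission_angles(n, exponent=1.5):
    # Emitted flux per solid angle ~ cos^k(theta); over solid angle the
    # polar-angle density is cos^k(theta) * sin(theta) on [0, pi/2].
    # Inverse-CDF sampling gives cos(theta) = u**(1/(k+1)).
    u = rng.random(n)
    return np.arccos(u ** (1.0 / (exponent + 1.0)))

theta_thermal = sample_emission_angles(100_000, exponent=1.5)  # cos^{3/2}(theta)
theta_fast = sample_emission_angles(100_000, exponent=1.0)     # cos(theta)

# A steeper exponent concentrates emission toward the surface normal,
# so the mean polar angle of thermal/epithermal neutrons is smaller.
print(theta_thermal.mean(), theta_fast.mean())
```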
NASA Astrophysics Data System (ADS)
Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.
2007-07-01
Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). The resulting Monte Carlo dose distributions are generally found to be in good correspondence with the calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or to include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
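The internal combination of proton fluence with reaction cross-sections can be illustrated as a simple fold per voxel. All numbers below are invented placeholders, not the evaluated cross-section data or fluence binning FLUKA actually uses:

```python
import numpy as np

# Illustrative energy grid and a 12C(p,x)11C-like cross-section shape.
energies = np.array([30.0, 60.0, 90.0, 120.0])           # MeV bin centers
sigma_c11 = np.array([5.0, 60.0, 45.0, 35.0]) * 1e-27    # cm^2 (made up)

# Proton fluence per voxel and energy bin (protons/cm^2, made up).
fluence = np.array([
    [1.0e9, 8.0e8, 5.0e8, 2.0e8],
    [9.0e8, 6.0e8, 3.0e8, 1.0e8],
])
n_carbon = 8.0e22  # assumed target nuclei per cm^3 in tissue-like material

# Production density per voxel: N_prod = n_target * sum_E phi(E) * sigma(E)
production = n_carbon * fluence @ sigma_c11
print(production)  # 11C nuclei produced per cm^3 in each voxel
```

Summing such maps over the contributing channels (15O, 13N, ...) and applying decay during the acquisition window gives the simulated PET activity distribution.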
COOL: A code for dynamic Monte Carlo simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2011-02-01
COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. Program summary: Program title: COOL; Catalogue identifier: AEHJ_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 1 111 674; No. of bytes in distributed program, including test data, etc.: 18 618 045; Distribution format: tar.gz; Programming language: C++; Computer: Desktop; Operating system: Linux; RAM: 500 Mbytes; Classification: 16.7, 23. Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are handled with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as
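The acceptance/rejection mechanism for the rare collisions can be sketched generically (a standard DSMC-style test, not COOL's actual implementation; all rates are invented):

```python
import random

random.seed(2)

def accept_collision(sigma, v_rel, sigma_vr_max):
    """DSMC acceptance/rejection: a candidate pair collides with
    probability (sigma * v_rel) / (sigma * v_rel)_max."""
    return random.random() < (sigma * v_rel) / sigma_vr_max

# Constant cross-section for simplicity; faster pairs collide more often.
sigma = 1.0e-14               # cm^2, illustrative
sigma_vr_max = sigma * 500.0  # running maximum of sigma * v_rel

accepted_slow = sum(accept_collision(sigma, 50.0, sigma_vr_max) for _ in range(10_000))
accepted_fast = sum(accept_collision(sigma, 450.0, sigma_vr_max) for _ in range(10_000))
print(accepted_slow, accepted_fast)
```

Accepted pairs would then exchange momentum elastically in their centre-of-mass frame; rejected pairs continue unperturbed in the trap potential.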
Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and obtain subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation
DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries
NASA Astrophysics Data System (ADS)
Kargaran, H.; Minuchehr, A.; Zolfaghari, A.
2016-07-01
In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in the FORTRAN90 language with the capability of parallel processing using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries created with first- and second-order surfaces. It performs independent particle tracking through complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and computes volumes in border grids using a hexahedral mesh. The developed code is robust and self-contained, requiring no separate codes such as mesh generators. The results of six test cases are presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems. The results are found to be in good agreement with references and experimental data.
A user-friendly, graphical interface for the Monte Carlo neutron optics code MCLIB
Thelliez, T.; Daemen, L.; Hjelm, R.P.; Seeger, P.A.
1995-12-01
The authors describe a prototype of a new user interface for the Monte Carlo neutron optics simulation program MCLIB. At this point in its development, the interface allows the user to define an instrument as a set of predefined instrument elements. The user can specify the intrinsic parameters of each element, its position and its orientation. The interface then writes output to the MCLIB package and starts the simulation. The present prototype is an early development stage of a comprehensive Monte Carlo simulation package that will serve as a tool for the design, optimization and assessment of the performance of new neutron scattering instruments. It will be an important tool for understanding the efficacy of new source designs in meeting the needs of these instruments.
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
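The core of MCMC posterior sampling can be illustrated with a bare Metropolis sampler on a toy Gaussian posterior. This is a generic sketch of the algorithm family, not MC3's own API, which wraps user models, data, priors, and multi-core chains:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_posterior(theta):
    # Toy standard-normal posterior; a real run would combine a
    # data likelihood with one of the priors listed above.
    return -0.5 * theta**2

def metropolis(n_steps, step=1.0):
    chain = np.empty(n_steps)
    theta = 0.0
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal()
        # Accept with probability min(1, p(proposal) / p(theta)).
        if np.log(rng.random()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        chain[i] = theta
    return chain

chain = metropolis(50_000)
print(chain.mean(), chain.std())  # samples approximate the posterior
```

Convergence diagnostics such as the Gelman-Rubin statistic compare several such chains started from dispersed points.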
Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code.
Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza
2013-07-01
The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator for 6 MV and 18 MV beams was performed. The results of the simulation were validated by measurements in water with an ionization chamber and with extended dose range (EDR2) film in solid water. The linac's X-ray output is highly sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc; for the film measurements, the 3ddose file produced by DOSXYZnrc was analyzed using a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region. In the build-up region, the difference was 1% at 6 MV and 2% at 18 MV. The mean difference between measurements and Monte Carlo simulation is very small at both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique.
1991-03-01
Monti, Captain, USAF. AFIT/GNE/ENP/91M-6. High Altitude Neutral... Approved for public release; distribution unlimited. Preface: The purpose of this study was to perform Monte Carlo simulations of neutral particle transport with primary and secondary... 4. Spatial Cell Geometry for Co-Altitude Detectors... 5. MCNP vs. SMAUG Neutron Fluence at Source Co
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristics. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined fields: (a) a primary M-100 X-ray calibration field, (b) a primary 60Co calibration beam, and (c) 6-MV and (d) 10-MV therapeutic beams in hospital. At the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode results closely resembled those of the other three codes, and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but at the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.
Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging Spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity from the internal scattering of lunar-soil particles. Therefore, both large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or the overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.
Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F
2002-06-01
A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
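The listed features (creation, tracking, tallying, destruction) can be illustrated with a toy serial history loop. This is a generic sketch of the pattern, not the MCB itself, and the slab problem and probabilities are invented:

```python
import random

random.seed(4)

def run_history(absorption_prob=0.3, leak_distance=10.0):
    """Track one particle through a toy 1-D slab: exponential flight
    lengths (mean free path = 1), absorb-or-continue at each collision."""
    x = 0.0
    while True:
        x += random.expovariate(1.0)          # sample next flight
        if x >= leak_distance:
            return x if x < leak_distance else leak_distance, "leak"
        if random.random() < absorption_prob:
            return x, "absorbed"              # particle destroyed here

tallies = {"leak": 0, "absorbed": 0}
total_path = 0.0
for _ in range(20_000):                       # particle creation loop
    path, fate = run_history()
    tallies[fate] += 1                        # tally particle information
    total_path += path

print(tallies, total_path / 20_000)           # mean track length per history
```

In the MCB proper, batches of such histories run on each MPI rank and the tallies are reduced across processors.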
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
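Because the beam dose is a weighted sum of pre-computed PSL doses, the commissioning step reduces to a regularized least-squares fit of the weights. A minimal sketch with invented matrices and a plain least-squares solver standing in for the paper's augmented Lagrangian method:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each column of D holds the pre-computed water dose of one PSL at the
# measurement points; the beam dose is D @ w (sizes are illustrative).
n_points, n_psl = 40, 8
D = rng.random((n_points, n_psl))
w_true = np.linspace(1.0, 2.0, n_psl)  # "true" weights producing the data
d_meas = D @ w_true                    # stand-in for measured dose

# Commissioning: minimize ||D w - d_meas||^2 + lam * ||L w||^2, where L is
# a finite-difference operator enforcing a smooth weight profile.
lam = 1e-3
L = np.diff(np.eye(n_psl), axis=0)
A = np.vstack([D, np.sqrt(lam) * L])
b = np.concatenate([d_meas, np.zeros(n_psl - 1)])
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(w_fit - w_true).max())    # recovered weights vs. truth
```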
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary: Program title: COOL; Catalogue identifier: AEHJ_v2_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 1 097 733; No. of bytes in distributed program, including test data, etc.: 18 425 722; Distribution format: tar.gz; Programming language: C++; Computer: Desktop; Operating system: Linux; RAM: 500 Mbytes; Classification: 16.7, 23; Catalogue identifier of previous version: AEHJ_v1_0; Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388; Does the new version supersede the previous version?: Yes. Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is an equivalent water cylinder for all three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with hollow-cylinder scoring volumes was initially selected for PENELOPE and MCNP because of its superior simulation of the actual shape and dimensions of a cell and its improved computer-time efficiency compared to spherical internal volumes. Some of the energy-transfer points that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Basic physical and chemical information needed for development of Monte Carlo codes
Inokuti, M.
1993-08-01
It is important to view track structure analysis as an application of a branch of theoretical physics (i.e., statistical physics and physical kinetics in the language of the Landau school). Monte Carlo methods and transport equation methods represent two major approaches. In either approach, it is of paramount importance to use as input the cross section data that best represent the elementary microscopic processes. Transport analysis based on unrealistic input data must be viewed with caution, because results can be misleading. Work toward establishing the cross section data, which demands a wide scope of knowledge and expertise, is being carried out through extensive international collaborations. In track structure analysis for radiation biology, the need for cross sections for the interactions of electrons with DNA and neighboring protein molecules seems to be especially urgent. Finally, it is important to interpret results of Monte Carlo calculations fully and adequately. To this end, workers should document input data as thoroughly as possible and report their results in detail in many ways. Workers in analytic transport theory are then likely to contribute to the interpretation of the results.
Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A
2012-07-01
Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by an alpha particle to its targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limits of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f1(z) and the mean specific energy were calculated. Results show good agreement with the literature for simple geometry; the maximum percentage difference found is <6 %. For the voxelised phantom, the study of voxel size highlighted that, for voxel sizes <1 µm, the shape of the f1(z) curve obtained with MCNPX differs significantly from that of the non-voxelised geometry. When using Geant4, little difference is observed regardless of voxel size. Below 1 µm, the use of Geant4 is required. However, the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.
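The quantities z and f1(z) can be illustrated with a toy single-hit calculation. The nucleus size, density, and the assumed hit-energy distribution below are illustrative stand-ins, not the study's transport results:

```python
import numpy as np

rng = np.random.default_rng(6)

# Specific energy z = (energy imparted) / (target mass); f1(z) is its
# distribution over single alpha-particle hits.
MEV_TO_J = 1.602e-13
radius_um = 5.0                                   # assumed nucleus radius
volume_cm3 = 4.0 / 3.0 * np.pi * (radius_um * 1e-4) ** 3
mass_kg = 1.0 * volume_cm3 * 1e-3                 # density 1 g/cm^3, in kg

# Energy imparted per single hit (MeV), drawn from an assumed gamma law
# with ~1 MeV mean to mimic stochastic alpha-track energy deposition.
energy_mev = rng.gamma(shape=4.0, scale=0.25, size=100_000)
z_gray = energy_mev * MEV_TO_J / mass_kg          # specific energy per hit (Gy)

# Single-hit density f1(z) estimated by a normalized histogram.
hist, edges = np.histogram(z_gray, bins=50, density=True)
z_mean = z_gray.mean()                            # frequency-mean specific energy
print(z_mean)
```

A transport code replaces the assumed gamma law with the actual per-hit energy imparted scored in the nucleus volume.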
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code
NASA Astrophysics Data System (ADS)
Askri, Boubaker
2015-10-01
The Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low-energy bremsstrahlung photons with beryllium. A benchmark test showed that good agreement was achieved when comparing the emitted neutron flux spectra predicted by the Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two-stage Monte Carlo simulation. In the first stage, the distributions of the seven phase-space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage, events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10^10 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10^9 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.
Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F
2002-02-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
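The photon-partitioning scheme described above can be imitated in plain Python. The absorption probability, seeds, and serial loop standing in for MPI ranks are all illustrative assumptions; real SPRNG streams are constructed to be uncorrelated, which distinct seeds alone do not guarantee:

```python
import random

def simulate_rank(rank, n_photons, seed_base=1234):
    """One 'rank' tracks its share of photons with its own RNG stream
    (a stand-in for an SPRNG stream; the seeds here are merely distinct,
    not provably uncorrelated as SPRNG streams are)."""
    rng = random.Random(seed_base + rank)
    absorbed = scattered = 0
    for _ in range(n_photons):
        if rng.random() < 0.3:          # toy absorption probability
            absorbed += 1
        else:
            scattered += 1
    return absorbed, scattered

def run(total_photons, n_ranks):
    # Equal partitioning of photons among processors, remainder spread
    # over the first few ranks.
    base, extra = divmod(total_photons, n_ranks)
    tallies = [simulate_rank(r, base + (1 if r < extra else 0))
               for r in range(n_ranks)]
    # Stand-in for the MPI_Reduce step: sum local tallies on the master.
    absorbed = sum(t[0] for t in tallies)
    scattered = sum(t[1] for t in tallies)
    return absorbed, scattered

a, s = run(100000, 8)
```

Because photon histories are independent, this pattern scales nearly linearly until the final reduction becomes a bottleneck, which matches the linear speed-up reported above.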
NASA Astrophysics Data System (ADS)
Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun
2012-05-01
Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be achieved only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data, within a maximum error of about 2%. Moreover, considering the experimental errors, the PMCEPT results can provide a gold standard of dose distributions for radiotherapy. The computation was also much faster than experiment, although the computing time is still a bottleneck for direct application to the daily routine treatment planning procedure.
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan; Leal, Luiz C.
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrarily user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
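As a rough illustration of what Doppler broadening does to a cross section, the sketch below smears a toy resonance with a Gaussian kernel of width sqrt(4·E·kT/A), a standard approximation; this is not the finite-difference scheme implemented in KENO, and the resonance parameters are invented:

```python
import math

K_BOLTZMANN = 8.617e-5  # eV/K

def doppler_broaden(energies, sigma_0K, T, A):
    """Broaden a 0 K cross section to temperature T (K) for a nuclide of
    mass number A by convolving with a Gaussian of width
    Delta = sqrt(4*E*kT/A) -- a common approximation, not the
    finite-difference method used in KENO."""
    kT = K_BOLTZMANN * T
    out = []
    for E in energies:
        width = math.sqrt(4.0 * E * kT / A)
        wsum = ssum = 0.0
        for Ej, sj in zip(energies, sigma_0K):
            w = math.exp(-((Ej - E) / width) ** 2)
            wsum += w
            ssum += w * sj
        out.append(ssum / wsum)
    return out

# Toy resonance on a coarse grid (energies in eV, sigma in barns),
# loosely shaped like a low-lying U-238 resonance but not real data.
grid = [6.0 + 0.05 * k for k in range(40)]
res = [1.0 + 100.0 / (1.0 + ((E - 6.7) / 0.05) ** 2) for E in grid]
hot = doppler_broaden(grid, res, T=900.0, A=238)
```

The broadened curve shows the expected behavior: the resonance peak drops and the wings rise, which is why problem-dependent temperatures matter for eigenvalue estimates.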
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of remote memory access (RMA) operations.
Ghoos, K.; Dekeyser, W.; Samaey, G.; Börner, P.; Baelmans, M.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise, and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
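The Robbins-Monro coupling idea, in its simplest form, averages successive noisy Monte Carlo estimates with a decaying gain; the target value and noise model below are illustrative stand-ins, not the FV/MC residual of the paper:

```python
import random

def robbins_monro(noisy_eval, x0, n_iter, rng):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n * (y_n - x_n) with
    gain a_n = 1/(n+1); each y_n is a fresh noisy Monte Carlo estimate,
    so the iterate equals the running average and its statistical noise
    decays as 1/sqrt(n) across coupling iterations."""
    x = x0
    for n in range(n_iter):
        y = noisy_eval(rng)
        x += (y - x) / (n + 1)
    return x

rng = random.Random(7)
# Stand-in for one noisy MC evaluation whose true value is 2.0
# with unit-variance statistical noise.
noisy = lambda r: 2.0 + r.gauss(0.0, 1.0)
estimate = robbins_monro(noisy, 0.0, 10000, rng)
```

This is the mechanism that lets averaging suppress the MC noise that would otherwise stall the coupled FV/MC iteration.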
Walsh, J. A.; Palmer, T. S.; Urbatsch, T. J.
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method are verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as low Earth orbit, lunar orbit, and the like) from proton bombardment based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Oehlke, Elisabeth; Mostacci, Domiziano; Schaffer, Paul; Trinczek, Michael; Hoehr, Cornelia
2016-01-01
The Monte Carlo code FLUKA is used to simulate the production of a number of positron emitting radionuclides, 18F, 13N, 94Tc, 44Sc, 68Ga, 86Y, 89Zr, 52Mn, 61Cu and 55Co, on a small medical cyclotron with a proton beam energy of 13 MeV. Experimental data collected at the TR13 cyclotron at TRIUMF agree within a factor of 0.6 ± 0.4 with the directly simulated data, except for the production of 55Co, where the simulation underestimates the experiment by a factor of 3.4 ± 0.4. The experimental data also agree within a factor of 0.8 ± 0.6 with the convolution of simulated proton fluence and cross sections from literature. Overall, this confirms the applicability of FLUKA to simulate radionuclide production at 13 MeV proton beam energy.
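The "convolution of simulated proton fluence and cross sections" mentioned above reduces to an energy integral of their product. The sketch below uses the trapezoidal rule with invented fluence and cross-section shapes (not TR13 data and not a real excitation function):

```python
def production_rate(energies, fluence, xs):
    """Integrate fluence(E) * sigma(E) over energy (trapezoidal rule).
    energies in MeV, fluence in protons/cm^2/MeV per incident proton,
    xs in cm^2; returns reactions per target atom per incident proton."""
    acc = 0.0
    for i in range(len(energies) - 1):
        dE = energies[i + 1] - energies[i]
        f = 0.5 * (fluence[i] * xs[i] + fluence[i + 1] * xs[i + 1])
        acc += f * dE
    return acc

# Toy numbers: flat fluence up to the 13 MeV beam energy and a cross
# section rising linearly above an assumed 5 MeV reaction threshold.
E = [5.0 + 0.5 * k for k in range(17)]                # 5 to 13 MeV
phi = [1.0e-2] * len(E)                               # per cm^2 per MeV
sigma = [max(0.0, (e - 5.0)) * 1.0e-26 for e in E]    # cm^2
rate = production_rate(E, phi, sigma)
```

Multiplying such a rate by the number of target atoms and the proton current gives the saturation yield that is compared against the directly simulated production.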
Dos Santos, M; Clairand, I; Gruel, G; Barquinero, J F; Incerti, S; Villagrasa, C
2014-10-01
The purpose of this work is to evaluate the influence of the chromatin condensation on the number of direct double-strand break (DSB) damages induced by ions. Two geometries of chromosome territories containing either condensed or decondensed chromatin were implemented as biological targets in the Geant4 Monte Carlo simulation code and proton and alpha irradiation was simulated using the Geant4-DNA processes. A DBSCAN algorithm was used in order to detect energy deposition clusters that could give rise to single-strand breaks or DSBs on the DNA molecule. The results of this study show an increase in the number and complexity of DNA DSBs in condensed chromatin when compared with decondensed chromatin.
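A minimal pure-Python DBSCAN, of the kind used above to group energy-deposition sites into strand-break candidates, might look as follows; the eps/min_pts values and coordinates are illustrative, not the parameters of the cited study:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: group energy-deposition sites (x, y, z) into
    clusters; returns one label per point, with -1 meaning noise."""
    def neighbors(i):
        xi, yi, zi = points[i]
        return [j for j, (x, y, z) in enumerate(points)
                if (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # noise (may be relabeled later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:
                queue.extend(nbrs_j)    # core point, keep expanding
    return labels

# Two tight groups of deposits plus one isolated site (toy coordinates).
hits = [(0, 0, 0), (1, 0, 0), (0, 1, 0),
        (10, 10, 0), (11, 10, 0), (10, 11, 0),
        (50, 50, 50)]
labels = dbscan(hits, eps=2.0, min_pts=2)
```

In the DSB analysis, each dense cluster of deposits on the DNA volume is then scored as a single- or double-strand break candidate depending on its size and strand assignment.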
Evaluation of a 50-MV Photon Therapy Beam from a Racetrack Microtron Using MCNP4B Monte Carlo Code
NASA Astrophysics Data System (ADS)
Gudowska, I.; Sorcini, B.; Svensson, R.
The high-energy photon therapy beam from the 50 MV racetrack microtron has been evaluated using the Monte Carlo code MCNP4B. The spatial and energy distributions of photons, and the radial and depth dose distributions in the phantom, are calculated for the stationary and scanned photon beams from different targets. The calculated dose distributions are compared to experimental data obtained with a silicon diode detector. Measured and calculated depth-dose distributions are in fairly good agreement, within 2-3% for positions in the range 2-30 cm in the phantom, whereas larger discrepancies of up to 10% are observed in the dose build-up region. For the stationary beams, the differences between the calculated and measured radial dose distributions are about 2-10%.
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With cell counts above a million, tallies above a hundred million, and particle histories above ten billion, simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully completed the simulation of the full-core pin-by-pin model by means of domain decomposition and nested parallel computation. The k_eff and flux of each cell are obtained.
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo technique for calculating the ground-state energy of the hydrogen atom.
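The variational side of this technique can be sketched in a few lines; this is an illustrative reconstruction in Python, not the FORTRAN source of VARHATOM or DMCATOM:

```python
import random

def vmc_hydrogen(alpha, n_samples, rng):
    """Variational Monte Carlo for hydrogen with trial wavefunction
    psi = exp(-alpha*r) in atomic units.  |psi|^2 r^2 dr is a Gamma(3)
    density, so r can be sampled exactly as the sum of three
    Exp(2*alpha) draws; the local energy is
    E_L = -alpha^2/2 + (alpha - 1)/r."""
    total = 0.0
    for _ in range(n_samples):
        r = sum(rng.expovariate(2.0 * alpha) for _ in range(3))
        total += -0.5 * alpha ** 2 + (alpha - 1.0) / r
    return total / n_samples

rng = random.Random(2024)
energy = vmc_hydrogen(alpha=0.9, n_samples=20000, rng=rng)
# For alpha = 1 the trial function is exact and E_L = -0.5 everywhere;
# alpha = 0.9 gives a variational energy slightly above -0.5 Hartree.
```

Minimizing the estimated energy over alpha recovers the exact ground state at alpha = 1, which is the point such an example program is built to demonstrate.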
PUVA: A Monte Carlo code for intra-articular PUVA treatment of arthritis
Descalle, M.A.; Laing, T.J.; Martin, W.R.
1996-12-31
Current rheumatoid arthritis treatments are only partially successful. Intra-articular psoralen-ultraviolet light (PUVA) phototherapy appears to be a new and valid alternative. Ultraviolet laser light (UVA) delivered into the knee joint through a fiber optic is used in combination with 8-methoxypsoralen (8-MOP), a light-sensitive chemical administered orally. A few hours after ingestion, the psoralen has diffused into all body cells. Once activated by UVA light, it binds to biological molecules, inhibiting cell division and ultimately producing local control of the arthritis. The magnitude of the response is proportional to the number of photoproducts delivered to tissues (i.e., the number of absorbed photons): the PUVA treatment will only be effective if a sufficient and relatively uniform dose is delivered to the diseased synovial tissues, while sparing other tissues such as cartilage. An application is being developed, based on analog Monte Carlo methods, to predict photon densities in tissues and the minimum number of intra-articular catheter positions necessary to ensure proper treatment of the diseased zone. Other interesting aspects of the problem deal with the complexity of the joint geometry, the physics of light scattering in tissues (a relatively new field of research that is not fully understood because of the variety of tissues and tissue components), and, finally, the need to include optical laws (reflection and refraction) at interfaces.
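The analog photon transport mentioned above can be sketched as a random walk with exponential free paths. The absorption and scattering coefficients below are illustrative, and isotropic scattering is assumed where real tissue is strongly forward-peaked (e.g. a Henyey-Greenstein phase function):

```python
import math
import random

def photon_random_walk(mu_a, mu_s, rng, max_steps=10000):
    """Analog Monte Carlo walk of one photon in tissue: exponential free
    paths with mu_t = mu_a + mu_s (1/cm), absorption with probability
    mu_a/mu_t at each interaction, isotropic rescattering otherwise.
    Returns the depth z (cm) at which the photon is absorbed."""
    mu_t = mu_a + mu_s
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0          # launched along +z from the fiber
    for _ in range(max_steps):
        s = -math.log(1.0 - rng.random()) / mu_t   # sampled free path
        x, y, z = x + s * ux, y + s * uy, z + s * uz
        if rng.random() < mu_a / mu_t:
            return z                    # absorbed at this collision
        uz = 2.0 * rng.random() - 1.0   # isotropic new direction
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - uz * uz)
        ux, uy = sin_t * math.cos(phi), sin_t * math.sin(phi)
    return z

rng = random.Random(5)
depths = [photon_random_walk(mu_a=1.0, mu_s=10.0, rng=rng)
          for _ in range(2000)]
mean_depth = sum(depths) / len(depths)
```

Tallying absorption sites over many photons and several catheter positions is exactly how such a code maps the photon density delivered to the synovium.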
Monte Carlo simulation of a multi-leaf collimator design for telecobalt machine using BEAMnrc code.
Ayyangar, Komanduri M; Kumar, M Dinesh; Narayan, Pradush; Jesuraj, Fenedit; Raju, M R
2010-01-01
This investigation aims to design a practical multi-leaf collimator (MLC) system for the cobalt teletherapy machine and check its radiation properties using the Monte Carlo (MC) method. The cobalt machine was modeled using the BEAMnrc Omega-Beam MC system, which could be freely downloaded from the website of the National Research Council (NRC), Canada. Comparison with standard depth dose data tables and the theoretically modeled beam showed good agreement within 2%. An MLC design with low melting point alloy (LMPA) was tested for leakage properties of leaves. The LMPA leaves with a width of 7 mm and height of 6 cm, with tongue and groove of size 2 mm wide by 4 cm height, produced only 4% extra leakage compared to 10 cm height tungsten leaves. With finite (60)Co source size, the interleaf leakage was insignificant. This analysis helped to design a prototype MLC as an accessory mount on a cobalt machine. The complete details of the simulation process and analysis of results are discussed.
Evans, T.E.; Leonard, A.W.; West, W.P.; Finkenthal, D.F.; Fenstermacher, M.E.; Porter, G.D.
1998-08-01
Experimentally measured carbon line emissions and total radiated power distributions from the DIII-D divertor and Scrape-Off Layer (SOL) are compared to those calculated with the Monte Carlo Impurity (MCI) model. A UEDGE background plasma is used in MCI with the Roth and Garcia-Rosales (RG-R) chemical sputtering model and/or one of six physical sputtering models. While results from these simulations do not reproduce all of the features seen in the experimentally measured radiation patterns, the total radiated power calculated in MCI is in relatively good agreement with that measured by the DIII-D bolometric system when the Smith78 physical sputtering model is coupled to RG-R chemical sputtering in an unaltered UEDGE plasma. Alternatively, MCI simulations done with UEDGE background ion temperatures along the divertor target plates adjusted to better match those measured in the experiment resulted in three physical sputtering models which, when coupled to the RG-R model, gave a total radiated power within 10% of the measured value.
Liu, T; Lin, H; Xu, X; Stabin, M
2015-06-15
Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBq-s) of the sphere with varying diameters are calculated by ARCHER and VIDA respectively. ARCHER's results are in agreement with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).
Commissioning of 6 MV medical linac for dynamic MLC-based IMRT on Monte Carlo code GEANT4.
Okamoto, Hiroyuki; Fujita, Yukio; Sakama, Kyoko; Saitoh, Hidetoshi; Kanai, Tatsuaki; Itami, Jun; Kohno, Toshiyuki
2014-07-01
Monte Carlo simulation is the most accurate tool for calculating dose distributions. In particular, the Electron Gamma Shower (EGS) computer code has been widely used for multi-purpose research in radiotherapy, but the Monte Carlo code GEANT4 (GEometry ANd Tracking) is rarely used for radiotherapy with photon beams and needs to be verified further under various irradiation conditions, particularly multi-leaf collimator-based intensity-modulated radiation therapy (MLC-based IMRT). In this study, GEANT4 was used for modeling of a 6 MV linac for dynamic MLC-based IMRT. To verify the modeling of our linac, we compared the calculated data with the measured depth-dose for a 10 × 10 cm(2) field and the measured dose profile for a 35 × 35 cm(2) field. Moreover, 120 MLC leaves were modeled in GEANT4. Five tests of the MLC modeling were performed: (I) MLC transmission, (II) MLC transmission profile including intra- and inter-leaf leakage, (III) tongue-and-groove leakage, (IV) a simple field with different field sizes set by the MLC and (V) a dynamic MLC-based IMRT field. For all tests, the calculations were compared with measurements from an ionization chamber and radiographic film. The calculations agreed with the measurements: MLC transmissions by calculation and measurement were 1.76 ± 0.01 and 1.87 ± 0.01%, respectively. In the gamma evaluation method (3%/3 mm), the pass rates of the (IV) and (V) tests were 98.5 and 97.0%, respectively. Furthermore, tongue-and-groove leakage could be calculated by GEANT4, and it agreed with the film measurements. The procedure for commissioning of dynamic MLC-based IMRT in GEANT4 is proposed in this study.
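The gamma evaluation method (3%/3 mm) used for tests (IV) and (V) combines dose difference and distance-to-agreement into one pass/fail metric per point. A one-dimensional sketch with invented depth-dose curves:

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.03, dist_tol=0.3):
    """1D gamma evaluation with a global criterion: for each reference
    point, minimize over evaluated points the combined dose-difference /
    distance-to-agreement metric.  Dose differences are normalized to
    the reference maximum; positions are in cm (0.3 cm = 3 mm)."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = float("inf")
        for ep, ed in zip(eval_pos, eval_dose):
            g = math.sqrt(((ep - rp) / dist_tol) ** 2 +
                          ((ed - rd) / (dose_tol * d_max)) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas

# Toy depth-dose curves: the 'calculated' curve is the 'measured' one
# with a uniform 1% offset, which should pass 3%/3 mm everywhere.
pos = [0.1 * k for k in range(100)]
measured = [math.exp(-0.05 * p) for p in pos]
calculated = [1.01 * d for d in measured]
gammas = gamma_index(pos, measured, pos, calculated)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point passes when its minimum gamma is at most 1; the pass rates quoted above (98.5% and 97.0%) are fractions of points meeting that condition on 2D film planes.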
NASA Astrophysics Data System (ADS)
Gudmundsson, J. T.; Lieberman, M. A.; Wang, Ying; Verboncoeur, J. P.
2009-10-01
The oopd1 particle-in-cell Monte Carlo (PIC-MC) code is used to simulate a capacitively coupled discharge in oxygen. oopd1 is a one-dimensional object-oriented PIC-MC code [1] in which the model system has one spatial dimension and three velocity components. It contains models for planar, cylindrical, and spherical geometries and replaces the XPDx1 series [2], which is not object-oriented. The revised oxygen model includes, in addition to electrons, the oxygen molecule in the ground state, the oxygen atom in the ground state, the negative ion O^-, and the positive ions O^+ and O2^+. The cross sections for collisions among the oxygen species have been significantly revised from earlier work using the xpdp1 code [3]. Here we explore the electron energy distribution function (EEDF), the ion energy distribution function (IEDF), and the density profiles for various pressures and driving frequencies. In particular, we investigate the influence of the O^+ ion on the IEDF, we explore the influence of multiple driving frequencies, and we make comparisons to the previous xpdx1 codes. [1] J. P. Verboncoeur, A. B. Langdon, and N. T. Gladd, Comp. Phys. Comm. 87 (1995) 199 [2] J. P. Verboncoeur, M. V. Alves, V. Vahedi, and C. K. Birdsall, J. Comp. Physics 104 (1993) 321 [3] V. Vahedi and M. Surendra, Comp. Phys. Comm. 87 (1995) 179
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within +/-1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE.
NASA Astrophysics Data System (ADS)
Pedrocchi, Fabio L.; Bonesteel, N. E.; DiVincenzo, David P.
2015-09-01
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana bound states (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in one-dimensional (1D) structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e., the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture.
NASA Astrophysics Data System (ADS)
Palomba, M.; D'Erasmo, G.; Pantaleo, A.
2003-02-01
The CSSE code, a GEANT3-based Monte Carlo simulation program, has been developed in the framework of the EXPLODET project (Nucl. Instr. and Meth. A 422 (1999) 918) with the aim of simulating experimental set-ups employed in Thermal Neutron Analysis (TNA) for landmine detection. Such a simulation code is useful for studying the background in the γ-ray spectra obtained with this technique, especially in the region where one expects to find the explosive signature (the γ-ray peak at 10.83 MeV coming from neutron capture by nitrogen). The main features of the CSSE code are introduced and its original innovations emphasized. Among the latter, an algorithm simulating the time correlation between primary particles, in accordance with their time distributions, is presented. Such a correlation is not usually achievable within standard GEANT-based codes, and it makes it possible to reproduce some important phenomena, such as pulse pile-up inside the NaI(Tl) γ-ray detector employed, producing a more realistic detector response simulation. CSSE has been successfully tested by reproducing a real nuclear sensor prototype assembled at the Physics Department of Bari University.
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs) expressed in absorbed dose per air kerma free-in-air for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of FLUKA code in computation of organ DCCs for photons using ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV in comparison with the results published in ICRP 74. The DCCs of female voxel phantom were found to be higher in comparison with male phantom for almost all organs in both the geometries.
NASA Astrophysics Data System (ADS)
Curreli, Davide; Lindquist, Kyle; Ruzic, David N.
2013-10-01
Techniques based on the Monte Carlo Binary Collision Approximation (BCA) are widely used for the evaluation of particle interactions with matter, but rarely coupled with a consistent kinetic plasma solver such as a Particle-in-Cell (PIC) code. The TRIM code [Eckstein; Biersack and Haggmark, 1980] and its dynamic-composition version TRIDYN [Moller and Eckstein, 1984] are two popular implementations of BCA, in which single-particle projectiles interact with a target of amorphous material according to the classical Kr-C interaction potential. The effect of surface roughness can be included as well, thanks to the Fractal-TRIM method [Ruzic and Chiu, 1989]. In the present study we couple BCA codes with Particle-in-Cell codes. The Lagrangian treatment of particle motion usually implemented in PIC codes suggests a natural coupling of PICs with BCAs, even if a number of caveats have to be taken into account, related to the discrete nature of computational particles, to the differences between the two approaches and, most importantly, to the multiple spatial and temporal scales involved. The breakdown of BCA at low energies (unless the projectiles are channeling through an oriented crystal layer [Hobler and Betz, 2001]) has been supplemented by Yamamura's semi-empirical relations.
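The BCA slowing-down loop at the heart of TRIM-like codes can be caricatured as follows; every constant (mean free path, electronic-stopping coefficient, energy-transfer fraction, cutoff) is an illustrative assumption, not a TRIM parameter:

```python
import math
import random

def bca_mean_range(e0, rng, n_ions=500):
    """Toy BCA-style slowing-down loop: between binary collisions an ion
    advances one mean free path, losing electronic energy proportional
    to sqrt(E) (Lindhard-like) continuously and a random fraction of E
    in each nuclear collision, until E drops below a cutoff.  Returns
    the mean penetration depth; all constants are illustrative."""
    mfp = 0.3            # nm, toy mean free path between collisions
    k_el = 0.2           # toy electronic-stopping constant (eV^0.5/nm)
    cutoff = 10.0        # eV, stop tracking below this energy
    ranges = []
    for _ in range(n_ions):
        e, depth = e0, 0.0
        while e > cutoff:
            depth += mfp
            e -= k_el * math.sqrt(e) * mfp        # electronic loss
            if e <= cutoff:
                break
            e -= e * 0.1 * rng.random()           # nuclear collision loss
        ranges.append(depth)
    return sum(ranges) / len(ranges)

rng = random.Random(11)
mean_range_1kev = bca_mean_range(1000.0, rng)
mean_range_100ev = bca_mean_range(100.0, rng)
```

A real BCA code would also sample scattering angles from the interatomic potential and follow recoils to score sputtering and damage; the loop above only conveys why the approximation breaks down once the energy per collision approaches the binding scale.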
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation covers motivation, the radiation transport codes considered (HZETRN, UPROP, FLUKA, and GEANT4) and their main physics, the space radiation cases considered (solar particle events and galactic cosmic rays), results for slab and spherical geometries, and a summary.
Application des codes de Monte Carlo à la radiothérapie par rayonnement à faible TEL
NASA Astrophysics Data System (ADS)
Marcié, S.
1998-04-01
In radiation therapy, a variety of low-LET radiations are used: photons from cobalt-60, photons and electrons from 4 to 25 MV generated in linear accelerators, and photons from caesium-137, iridium-192 and iodine-125. To determine as accurately as possible the dose delivered to tissue by these radiations, software as well as measurement instruments are used. With the growth in the power and capacity of computers, the application of Monte Carlo codes has extended to radiation therapy, which has made it possible to better characterize the effects of radiation, determine spectra, specify the values of parameters used in dosimetric calculations, verify algorithms, study measurement systems and phantoms, calculate the dose at points inaccessible to measurement, and consider the use of new radionuclides.
Performance of the improved version of monte Carlo code A3MCNP for large-scale shielding problems.
Omura, M; Miyake, Y; Hasegawa, T; Ueki, K; Sato, O; Haghighat, A; Sjoden, G E
2005-01-01
A3MCNP (Automatic Adjoint Accelerated MCNP) is a revised version of the MCNP Monte Carlo code, which automatically prepares variance reduction parameters for the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using a deterministic 'importance' (or adjoint) function, CADIS performs source and transport biasing within the weight-window technique. The current version of A3MCNP uses the three-dimensional (3-D) Sn transport code TORT to determine a 3-D importance function distribution. Based on simulations of several real-life problems, it is demonstrated that A3MCNP provides precise results with remarkably short computation times by using proper, objectively derived variance reduction parameters. However, since the first version of A3MCNP provided only a point source configuration option for large-scale shielding problems, such as spent-fuel transport casks, a large amount of memory may be necessary to store enough points to properly represent the source. Hence, we have developed an improved version of A3MCNP (referred to as A3MCNPV) which has a volumetric source configuration option. This paper describes the successful use of A3MCNPV for a concrete cask neutron and gamma-ray shielding problem, and a PWR dosimetry problem.
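The weight-window mechanics that CADIS parameterizes can be illustrated with a minimal sketch. The window bounds, the splitting rule, and the survival weight below are generic textbook choices, not A3MCNP's actual implementation; the point is only that splitting and Russian roulette preserve the expected weight.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of particle weights after a weight-window check:
    heavy particles are split, light ones are rouletted. The expected
    total weight is preserved in both branches."""
    if weight > w_high:
        # Split into n equal copies so each lands inside the window.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv,
        # carrying the survival weight w_surv (here: window midpoint).
        w_surv = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]
```

Splitting is deterministic (a weight-5 particle in a [0.5, 2.0] window becomes three weight-5/3 copies); roulette is stochastic but unbiased, so averaging many rouletted histories recovers the original weight.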
NASA Astrophysics Data System (ADS)
Lee, Y.-K.; Brun, E.
2014-04-01
The Sodium-cooled fast neutron reactor ASTRID is currently under design and development in France. The traditional ECCO/ERANOS fast reactor code system used for ASTRID core design calculations relies on the multi-group JEFF-3.1.1 data library. To gauge the use of the ENDF/B-VII.0 and JEFF-3.1.1 nuclear data libraries in fast reactor applications, two recent OECD/NEA computational benchmarks specified by Argonne National Laboratory were calculated. Using the continuous-energy TRIPOLI-4 Monte Carlo transport code, both the ABR-1000 MWth MOX core and the metallic (U-Pu) core were investigated. Under the two different fast neutron spectra, reactivity impact studies of the two data libraries, ENDF/B-VII.0 and JEFF-3.1.1, were performed. Using the JEFF-3.1.1 library under the BOEC (beginning of equilibrium cycle) condition, high reactivity effects of 808 ± 17 pcm and 1208 ± 17 pcm were observed for the ABR-1000 MOX core and the metallic core, respectively. To analyze the causes of these reactivity differences, several TRIPOLI-4 runs using the mixed data libraries feature allowed us to identify the nuclides and the nuclear data accounting for the major part of the observed reactivity discrepancies.
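Reactivity effects in pcm, as quoted above, follow from the k-effective values of paired library runs via rho = (k - 1)/k. A minimal sketch of the conversion; the k values below are illustrative placeholders, not results from the benchmark.

```python
def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (k - 1.0) / k * 1e5

def library_swap_effect_pcm(k_a, k_b):
    """Reactivity difference between two runs of the same core model
    computed with two different nuclear data libraries."""
    return reactivity_pcm(k_b) - reactivity_pcm(k_a)

# Illustrative k-effective values only (not from the benchmark):
effect = library_swap_effect_pcm(1.00000, 1.00815)
```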
Giantsoudi, D; Schuemann, J; Dowdell, S; Paganetti, H; Jia, X; Jiang, S
2014-06-15
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications now allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully implemented proton-therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, whose anatomical and geometrical complexity (air cavities and density heterogeneities) makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between the TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a target gamma index passing rate of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton-therapy MCS code for a group of dosimetrically challenging patient cases.
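The gamma index criterion used above (2% dose difference, 2 mm distance-to-agreement) combines dose and spatial tolerances into one pass/fail metric per reference point. A deliberately simplified 1D sketch follows; clinical implementations are 3D and interpolate the evaluated distribution, and the doses here are assumed normalized so that a 2% tolerance is 0.02.

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol, dist_tol):
    """Global gamma index at each reference point: the minimum over all
    evaluated points of sqrt((dd/dose_tol)^2 + (dx/dist_tol)^2)."""
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        best = min(
            math.sqrt(((de - dr) / dose_tol) ** 2 + ((xe - xr) / dist_tol) ** 2)
            for xe, de in zip(eval_pos, eval_dose)
        )
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point fails only if no nearby evaluated point agrees within both tolerances simultaneously, which is why gamma analysis is more forgiving than a pointwise dose difference in steep gradients.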
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPUs), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed.
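The generation-by-generation structure that forces explicit thread coordination on a GPU is visible even in the simplest eigenvalue Monte Carlo. The toy below estimates k-infinity for an infinite homogeneous medium where every neutron is absorbed and causes fission with probability sigma_f/sigma_a, releasing nu neutrons; it is purely illustrative and unrelated to the paper's actual code, and the fractional nu is tallied as an expected yield.

```python
import random

def estimate_k_inf(p_fission, nu, n_start=20000, generations=20, seed=42):
    """Analog MC estimate of k-infinity: each generation starts a fixed
    neutron population (renormalized, as in a real eigenvalue run) and
    tallies the fission neutrons produced per starting neutron."""
    rng = random.Random(seed)
    k_estimates = []
    for _ in range(generations):
        produced = sum(nu for _ in range(n_start) if rng.random() < p_fission)
        k_estimates.append(produced / n_start)
    return sum(k_estimates) / len(k_estimates)

# Analytic answer for this toy model: k_inf = nu * p_fission = 2.5 * 0.4 = 1.0
k = estimate_k_inf(0.4, 2.5)
```

Each generation must finish (a synchronization point) before the next can be sourced from its fission sites, which is exactly the barrier that breaks the "embarrassingly parallel" picture on a GPU.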
Monte Carlo Predictions of Prompt Fission Neutrons and Photons: a Code Comparison
NASA Astrophysics Data System (ADS)
Talou, P.; Kawano, T.; Stetcu, I.; Vogt, R.; Randrup, J.
2014-04-01
This paper reports on initial comparisons between the LANL CGMF and LBNL/LLNL FREYA codes, which both aim at computing prompt fission neutrons and gammas. While the methodologies used in both codes are somewhat similar, the detailed implementations and physical assumptions are different. We are investigating how some of these differences impact predictions.
NASA Astrophysics Data System (ADS)
Kahraman, A.; Kaya, S.; Jaksic, A.; Yilmaz, E.
2015-05-01
Radiation-sensing Field Effect Transistors (RadFETs or MOSFET dosimeters) with SiO2 gate dielectric have found applications in space, radiotherapy clinics, and high-energy physics laboratories. More sensitive RadFETs, which require modifications in device design, including the gate dielectric, are being considered for personal dosimetry applications. This paper presents the results of a detailed study of the RadFET energy response simulated with the PENELOPE Monte Carlo code. Alternative materials to SiO2 were investigated to develop new high-efficiency radiation sensors. Namely, in addition to SiO2, Al2O3 and HfO2 were simulated as gate materials, and the energy deposited in these layers was determined for photon irradiation with energies between 20 keV and 5 MeV. The simulations were performed for capped and uncapped configurations of devices irradiated by point and extended sources, the surface area of which is the same as that of the RadFETs. Energy distributions of transmitted and backscattered photons were estimated using impact detectors to provide information about particle fluxes within the geometrical structures. The absorbed energy values in the RadFET material zones were recorded. For photons with low and medium energies, the physical processes that affect the absorbed energy values in different gate materials are discussed on the basis of the modelling results. The results show that HfO2 is the most promising of the simulated gate materials.
Three-dimensional cellular dosimetry of I-131 mIBG in neuroblastoma with EGS4 Monte Carlo code
Gouriou, J.; Ricard, M.; Lumbroso, J.; Aubert, B.
1995-05-01
The adequate distribution of radiation dose to tumor cells is the most important factor for the outcome of internal (metabolic) radiotherapy. This study investigates the dosimetry of I-131 meta-iodobenzyl-guanidine (mIBG) at the cellular level in neuroblastoma. We developed a program based on the EGS4 Monte Carlo code allowing the computation of basic dosimetric parameters such as absorbed and cumulated fractions, scaled dose point kernels and dose rates, especially for radionuclides with therapeutic potential. It can be applied to various types of 3-D radionuclide tumor distributions. Geometrical parameters and mIBG uptake in xenografted tumors (nude mice, SK-N-SH) were obtained from micro-autoradiographies and SIMS microscopy images. The tumor could be simulated by a spheroid (500 µm in radius) made up of spherical cells (9 µm in radius) with a 1 µm cytoplasm. Among this cell population, only 3% bound mIBG, with local maximal rates of up to 16%. The radiation doses were calculated for I-131, since this radionuclide is the most widely used for labelling mIBG for therapeutic applications.
Cheung, Joel Y.C.; Yu, K.N.
2006-01-15
In the algorithm of Leksell GAMMAPLAN (the treatment planning software of the Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of the Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of the Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the 60Co source and radiated isotropically into a solid angle of 4π. The percentages of scattered photons within all photons reaching the phantom space using different collimators were calculated, with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to single beam dose profiles for the different collimators. Output spectra were calculated for the four different collimators. Increasing the efficiency of the simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. The backscattered photons generated within the 60Co source and the beam channel also contribute to the output spectra.
NASA Astrophysics Data System (ADS)
Ibey, Bennett L.; Lee, Seungjoon; Ericson, M. Nance; Wilson, Mark A.; Cote, Gerard L.
2004-06-01
A Multi-Layer Monte Carlo (MLMC) model was developed to predict the results of in vivo blood perfusion and oxygenation measurements of transplanted organs as measured by an indwelling optical sensor. A sensor has been developed which uses three-source excitation in the red and infrared ranges (660, 810, 940 nm). In vitro data were taken using this sensor by changing the oxygenation state of whole blood and passing it through a single-tube pump system wrapped in bovine liver tissue. The collected data showed that the red signal increased as blood oxygenation increased, while the infrared signal decreased. The center wavelength of 810 nm was shown to be largely insensitive to changes in blood oxygenation. A model was developed using MLMC code that sampled the wavelength range from 600-1000 nm every 6 nm. Using scattering and absorption data for blood and liver tissue within this wavelength range, a five-layer model was developed (tissue, clear tubing, blood, clear tubing, tissue). The theoretical data generated from this model were compared to the in vitro data and showed good correlation with changing blood oxygenation.
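The building block of any multi-layer photon Monte Carlo is sampling exponential free paths against the layers' optical depths. The sketch below is deliberately minimal (absorption only, scattering omitted, normal incidence) and the layer thicknesses and coefficients are illustrative assumptions; it is checked against the Beer-Lambert law.

```python
import math
import random

def transmittance_mc(layers, n_photons=200000, seed=7):
    """Fraction of photons crossing a stack of purely absorbing layers,
    each given as (thickness_cm, mu_a_per_cm). With absorption only,
    one exponential free path is sampled against the total optical depth."""
    tau_total = sum(t * mu for t, mu in layers)
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n_photons)
                   if -math.log(1.0 - rng.random()) > tau_total)
    return survived / n_photons

# Illustrative stack: 1 mm of mu_a = 2/cm over 0.5 mm of mu_a = 10/cm
layers = [(0.1, 2.0), (0.05, 10.0)]
```

Adding scattering turns this into a random walk with direction sampling at each interaction, which is where a full MLMC model (and its per-layer optical properties) comes in.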
Absorbed dose estimations of 131I for critical organs using the GEANT4 Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Ziaur, Rahman; Shakeel, ur Rehman; Waheed, Arshed; Nasir, M. Mirza; Abdul, Rashid; Jahan, Zeb
2012-11-01
The aim of this study is to compare the absorbed doses of the critical organs of 131I calculated using the MIRD (Medical Internal Radiation Dose) methodology with the corresponding predictions made by GEANT4 simulations. S-values (mean absorbed dose rate per unit activity) and energy deposition per decay for the critical organs of 131I at various ages, using a standard cylindrical phantom comprising water and ICRP soft-tissue material, have also been estimated. In this study the effect of thyroid volume reduction during radiation therapy on the calculated absorbed dose is also estimated using GEANT4. Photon-specific energy deposition in the other organs of the neck, due to 131I decay in the thyroid, has also been estimated. The maximum relative difference between MIRD and the GEANT4 simulated results is 5.64% for an adult's critical organs of 131I. Excellent agreement was found between the results for water and ICRP soft tissue using the cylindrical model. S-values are tabulated for the critical organs of 131I for 1-, 5-, 10-, 15- and 18-y-old (adult) individuals. S-values for cylindrical thyroids of different sizes show a 3.07% relative difference between GEANT4 and the Siegel & Stabin results. Comparison of ionization chamber measurements at 0.5 and 1 m from the neck with the GEANT4-based Monte Carlo simulation results shows good agreement. This study shows that the GEANT4 code is an important tool for internal dosimetry calculations.
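The S-value referred to above is defined in the MIRD schema as S = Delta * phi / m: mean energy emitted per decay, times the fraction absorbed in the target, divided by the target mass. A sketch with illustrative numbers; the 20 g thyroid mass and the assumption that the 131I beta component (mean energy about 0.192 MeV per decay) is fully absorbed locally are illustrative inputs, not values from the paper.

```python
def s_value_gy_per_bq_s(energy_per_decay_MeV, absorbed_fraction, target_mass_kg):
    """MIRD schema: S = Delta * phi / m, returned in Gy per (Bq * s),
    i.e. absorbed dose rate to the target per unit source activity."""
    MEV_TO_J = 1.602176634e-13  # exact CODATA conversion
    return energy_per_decay_MeV * MEV_TO_J * absorbed_fraction / target_mass_kg

# Illustrative thyroid self-dose from the beta component only:
s = s_value_gy_per_bq_s(0.192, 1.0, 0.020)
```

The photon component, by contrast, has an absorbed fraction well below 1 for a small organ, which is what the phantom simulations quantify.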
NASA Astrophysics Data System (ADS)
Mohanty, P. K.; Dugad, S. R.; Gupta, S. K.
2012-04-01
A detailed description of a compact Monte Carlo simulation code "G3sim" for studying the performance of a plastic scintillator detector with wavelength shifter (WLS) fiber readout is presented. G3sim was developed for optimizing the design of new scintillator detectors used in the GRAPES-3 extensive air shower experiment. Propagation of the blue photons produced by the passage of relativistic charged particles in the scintillator is treated by incorporating absorption, total internal reflection, and diffuse reflection. Capture of blue photons by the WLS fibers and subsequent re-emission of longer-wavelength green photons is appropriately treated. The trapping and propagation of green photons inside the WLS fiber is treated using the laws of optics for meridional and skew rays. The propagation time of each photon is taken into account in generating the electrical signal at the photomultiplier. A comparison of the results from G3sim with the performance of a prototype scintillator detector showed excellent agreement between the simulated and measured properties. The simulation results can be parametrized in terms of exponential functions, providing deeper insight into the functioning of these versatile detectors. G3sim can be used to aid the design and optimize the performance of scintillator detectors prior to actual fabrication, which may result in considerable savings of time, labor, and money.
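Photon trapping in a WLS fiber is governed by total internal reflection at the core-cladding interface; for meridional rays the fraction of isotropically emitted photons trapped toward one fiber end is (1 - n_clad/n_core)/2. A sketch with typical polystyrene-core/PMMA-cladding indices (assumed values for illustration, not parameters from G3sim):

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Incidence angle (from the surface normal) above which total
    internal reflection occurs at the core-cladding interface."""
    return math.degrees(math.asin(n_clad / n_core))

def meridional_trap_fraction(n_core, n_clad):
    """Fraction of isotropically emitted photons trapped toward ONE
    fiber end by meridional-ray total internal reflection."""
    return 0.5 * (1.0 - n_clad / n_core)

# Typical indices: polystyrene core 1.59, PMMA cladding 1.49 (assumed)
f = meridional_trap_fraction(1.59, 1.49)
```

For these indices the trapped fraction per end is only about 3%, which is why skew rays and propagation losses, both treated in G3sim, matter for the predicted photomultiplier signal.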
Monte-Carlo simulation of a coded aperture SPECT apparatus using uniformly redundant arrays
NASA Astrophysics Data System (ADS)
Gemmill, Paul E.; Chaney, Roy C.; Fenyves, Ervin J.
1995-09-01
Coded apertures are used in tomographic imaging systems to improve the signal-to-noise ratio (SNR) of the apparatus through a larger aperture transmission area while maintaining the spatial resolution of a single pinhole. Coded apertures developed from uniformly redundant arrays (URAs) have an aperture transmission area of slightly over one half of the total aperture. Computer simulations show that the spatial resolution of a SPECT apparatus using a URA-generated coded aperture compares favorably with theoretical expectations, with an SNR approximately 3.5 to 4 times that of a single pinhole camera for a variety of cases.
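The delta-like decoding property that gives URAs their SNR advantage can be illustrated in 1D with a Legendre (quadratic-residue) sequence: for primes p ≡ 3 (mod 4) its periodic autocorrelation is perfectly two-level (p at zero shift, -1 everywhere else). This is a simplified 1D stand-in for the 2D URA construction, offered as an illustration rather than the paper's actual array.

```python
def legendre_sequence(p):
    """+1 at quadratic residues mod p (and at index 0), -1 elsewhere."""
    residues = {(i * i) % p for i in range(1, p)}
    return [1 if (i == 0 or i in residues) else -1 for i in range(p)]

def periodic_autocorrelation(seq, shift):
    """Cyclic autocorrelation of a +/-1 sequence at the given shift."""
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

seq = legendre_sequence(11)  # 11 = 3 mod 4, so off-peak correlation is -1
```

The flat off-peak correlation means the correlation-decoded image of a point source is a sharp peak on a uniform pedestal, which is exactly the property the SPECT reconstruction exploits.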
NASA Astrophysics Data System (ADS)
Fracchiolla, F.; Lorentini, S.; Widesott, L.; Schwarz, M.
2015-11-01
We propose a method of creating and validating a Monte Carlo (MC) model of a proton Pencil Beam Scanning (PBS) machine using only commissioning measurements and avoiding nozzle modeling. Measurements with a scintillating screen coupled to a CCD camera, an ionization chamber and a Faraday cup were used to model the beam in TOPAS without using any machine parameter information except the virtual source distance from the isocenter. The model was then validated on simple Spread-Out Bragg Peaks (SOBPs) delivered in a water phantom and with six realistic clinical plans (many involving 3 or more fields) on an anthropomorphic phantom. In particular, the behavior of the moveable Range Shifter (RS) was investigated and a model of it is proposed. Gamma analysis (3%, 3 mm) was used to compare MC, TPS (XiO-ELEKTA) and measured 2D dose distributions (using radiochromic film). The MC model proposed here gives good results in the validation phase, both for simple irradiation geometries (SOBP in water) and for modulated treatment fields (on anthropomorphic phantoms). In particular, head lesions were investigated and both MC and TPS data were compared with measurements. Treatment plans with no RS always showed very good agreement with both (γ-passing rate (PR) > 95%). Treatment plans requiring the RS were also tested and validated. For these plans, MC results showed better agreement with measurements (γ-PR > 93%) than those from the TPS (γ-PR < 88%). This work shows how to simplify the MC modeling of a PBS machine for proton therapy treatments without modeling any hardware components, and proposes a more reliable RS model than the one implemented in our TPS. The validation process has shown that this code is a valid candidate for a completely independent treatment-plan dose calculation algorithm, making it an important future tool for the patient-specific QA verification process.
Development of Momentum Conserving Monte Carlo Simulation Code for ECCD Study in Helical Plasmas
NASA Astrophysics Data System (ADS)
Murakami, S.; Hasegawa, S.; Moriya, Y.
2015-03-01
A parallel momentum conserving collision model is developed for the GNET code, in which a linearized drift kinetic equation is solved in the five-dimensional phase space to study electron cyclotron current drive (ECCD) in helical plasmas. In order to conserve parallel momentum, we introduce a field particle collision term in addition to the test particle collision term. Two types of field particle collision term are considered. One is the high-speed-limit model, where the momentum conserving term does not depend on the velocity of the background plasma and can be expressed in a simple form. The other is the velocity-dependent model, which is derived directly from the Fokker-Planck collision term. In the velocity-dependent model the field particle operator can be expanded in Legendre polynomials and, introducing the Rosenbluth potentials, we derive the field particle term for each Legendre polynomial. In the GNET code, we introduce an iterative process to implement the momentum conserving collision operator. The high-speed-limit model is applied to the ECCD simulation of a Heliotron J plasma. The simulation results show good conservation of momentum with the iterative scheme.
Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division
2009-06-09
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron accelerator driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ~375 kW, including a fission power of ~260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent development of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made detailed three-dimensional burnup simulations possible. A fully detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-group or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the
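A burnup step of the kind described amounts to integrating depletion (Bateman) equations between transport solves, with reaction rates supplied by the transport code. A minimal two-nuclide chain with effective one-group rates is sketched below; the rate constants are illustrative placeholders (not from the KIPT model), and the simple explicit-Euler integration is checked against the analytic Bateman solution.

```python
import math

def deplete_chain(n1_0, lam1, lam2, t, steps=100000):
    """Explicit-Euler integration of a two-member chain:
        dN1/dt = -lam1*N1
        dN2/dt =  lam1*N1 - lam2*N2
    where each lam may be a decay constant or an effective one-group
    transmutation rate sigma*phi supplied by the transport solve."""
    dt = t / steps
    n1, n2 = n1_0, 0.0
    for _ in range(steps):
        dn1 = -lam1 * n1
        dn2 = lam1 * n1 - lam2 * n2
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

def bateman_n2(n1_0, lam1, lam2, t):
    """Analytic daughter inventory for the same chain (lam1 != lam2)."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```

Production codes use full isotope matrices and matrix-exponential or substepping schemes, but the alternation (transport solve for rates, depletion solve for inventories) is the same structure the abstract describes.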
Thermal neutron response of a boron-coated GEM detector via GEANT4 Monte Carlo code.
Jamil, M; Rhee, J T; Kim, H G; Ahmad, Farzana; Jeon, Y J
2014-10-22
In this work, we report the design configuration and the performance of a hybrid Gas Electron Multiplier (GEM) detector. In order to make the detector sensitive to thermal neutrons, the forward electrode of the GEM has been coated with enriched boron-10 material, which works as a neutron converter. A 5×5 cm² GEM configuration has been used for the thermal neutron studies. The response of the detector has been estimated using the GEANT4 MC code with two different physics lists. Using the QGSP_BIC_HP physics list, the neutron detection efficiency was determined to be about 3%, while with the QGSP_BERT_HP physics list the efficiency was around 2.5%, at an incident thermal neutron energy of 25 meV. This response shows that coating the GEM with a boron converter improves its efficiency for thermal neutron detection.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.
Furuta, Takuya; Sato, Tatsuhiko; Han, Min; Yeom, Yeon; Kim, Chan; Brown, Justin; Bolch, Wesley
2017-04-04
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed of the transport process, an original algorithm was introduced that initially prepares decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel geometry.
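The geometric primitive behind tetrahedral-mesh tracking is deciding whether a point lies inside a given tetrahedron, which reduces to signed-volume (barycentric) tests. The sketch below is an illustration of that primitive, not PHITS's actual algorithm (which additionally uses the decomposition maps described above to narrow the candidate tetrahedra).

```python
def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d) via the scalar triple product."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    det = (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
         - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
         + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))
    return det / 6.0

def point_in_tetrahedron(p, a, b, c, d, eps=1e-12):
    """p is inside (or on the boundary of) the tetrahedron iff the four
    sub-tetrahedra formed by replacing one vertex with p all have the
    same orientation as the full tetrahedron (barycentric coordinates >= 0)."""
    v = signed_volume(a, b, c, d)
    subs = [signed_volume(p, b, c, d), signed_volume(a, p, c, d),
            signed_volume(a, b, p, d), signed_volume(a, b, c, p)]
    if v < 0:
        subs = [-s for s in subs]
    return all(s >= -eps for s in subs)
```

Since the four sub-volumes sum to the full volume, this is equivalent to requiring all four barycentric coordinates of p to be non-negative.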
A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code
NASA Astrophysics Data System (ADS)
Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.
2013-12-01
The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation, including thermal infrared emission, using the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic cloud fields generated from randomized optical-thickness distributions in each layer and regularly distributed tilted clouds; 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model; and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. For method 2 (numerical modeling), we employed numerical simulation results for Californian summer stratus clouds from a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of 100 m horizontal resolution, regionally averaged cloud optical thickness,
Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study
NASA Astrophysics Data System (ADS)
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-01
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to those of conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between the TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated the target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with a 1%/1 mm criterion failed for most beams for this site, while for a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was
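The coverage indices quoted above (mean dose, V95, D98, D50, D02) are dose-volume-histogram statistics computable from a list of voxel doses. A sketch under the usual conventions, which are assumptions here: D_xx is the minimum dose received by the hottest xx% of the volume, and V_xx is the volume fraction receiving at least xx% of the prescription.

```python
def d_metric(doses, percent_volume):
    """D_xx: minimum dose received by the hottest xx% of the voxels."""
    ranked = sorted(doses, reverse=True)
    n = max(1, round(len(ranked) * percent_volume / 100.0))
    return ranked[n - 1]

def v_metric(doses, percent_dose, prescription):
    """V_xx: fraction of voxels receiving at least xx% of the prescription."""
    threshold = prescription * percent_dose / 100.0
    return sum(d >= threshold for d in doses) / len(doses)
```

Comparing two dose engines via such indices (as done here for gPMC versus TOPAS) summarizes agreement in the target, while the gamma analysis captures voxel-by-voxel spatial agreement.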
NASA Astrophysics Data System (ADS)
Shinohara, Kouji; Suzuki, Yasuhiro; Kim, Junghee; Kim, Jun Young; Jeon, Young Mu; Bierwage, Andreas; Rhee, Tongnyeol
2016-11-01
The fast ion dynamics and the associated heat load on the plasma-facing components in the KSTAR tokamak were investigated with the orbit-following Monte-Carlo (OFMC) code in several magnetic field configurations and realistic wall geometry. In particular, attention was paid to the effect of resonant magnetic perturbation (RMP) fields. Both the vacuum-field approximation and the self-consistent field that includes the response of a stationary plasma were considered. In both cases, the magnetic perturbation (MP) is dominated by the toroidal mode number n = 1, but otherwise its structure is strongly affected by the plasma response. The loss of fast ions increased significantly when the MP field was applied. Most lost particles hit the poloidal limiter structure around the outer mid-plane on the low-field side, but the distribution of heat loads across the three limiters varied with the form of the MP. Short-timescale loss of supposedly well-confined co-passing fast ions was also observed. These losses started within a few poloidal transits after the fast ion was born deep inside the plasma on the high-field side of the magnetic axis. In the configuration studied, these losses are facilitated by the combination of two factors: (i) the large magnetic drift of fast ions across a wide range of magnetic surfaces due to a low plasma current, and (ii) resonant interactions between the fast ions and magnetic islands that were induced inside the plasma by the external RMP field. These effects are expected to play an important role in present-day tokamaks.
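Orbit-following codes advance charged particles with a time-explicit mover; a standard choice in full-orbit codes is the Boris scheme, which rotates the velocity in the magnetic field between two half electric kicks. The sketch below is a generic illustration of that scheme, not OFMC's actual integrator:

```python
import numpy as np

def boris_push(x, v, q_m, E, B, dt):
    """Advance a charged particle by one time step with the Boris scheme.

    x, v: position and velocity vectors; q_m: charge-to-mass ratio;
    E, B: electric and magnetic field vectors at the particle position.
    With E = 0 the rotation conserves kinetic energy exactly.
    """
    v_minus = v + 0.5 * dt * q_m * E             # first half electric kick
    t = 0.5 * dt * q_m * B                       # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)     # rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)      # rotation, step 2
    v_new = v_plus + 0.5 * dt * q_m * E          # second half electric kick
    x_new = x + dt * v_new
    return x_new, v_new
```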
Yakoumakis, E; Dimitriadis, A; Gialousis, G; Makri, T; Karavasilis, E; Yakoumakis, N; Georgiou, E
2015-02-01
Radiation protection and estimation of the radiological risk in paediatric radiology are essential due to children's significant radiosensitivity and their greater overall health risk. The purpose of this study was to estimate the organ and effective doses of paediatric patients undergoing barium meal (BM) examinations and to assess the radiation Risk of Exposure-Induced cancer Death (REID) for these patients. BM studies involve fluoroscopy and multiple radiographs. Since direct measurements of the dose in each organ are very difficult, if at all possible, clinical measurements of dose-area products (DAPs) and the PCXMC 2.0 Monte Carlo code were used. In the clinical measurements, DAPs were assessed during examination of 51 patients undergoing BM examinations, separated almost equally into three age categories: neonatal, 1- and 5-y old. Organs receiving the highest amounts of radiation during BM examinations were the stomach (10.4, 10.2 and 11.1 mGy), the gall bladder (7.1, 5.8 and 5.2 mGy) and the spleen (7.5, 8.2 and 4.3 mGy); the three values in brackets correspond to neonatal, 1- and 5-y-old patients, respectively. For all ages, the main contributors to the total organ and effective doses are the fluoroscopy projections. The average DAP values and absorbed doses to the patient were higher for the left lateral projections. The REID calculated for boys was 4.8 × 10^-2, 3.0 × 10^-2 and 2.0 × 10^-2 % for neonatal, 1- and 5-y-old patients, respectively; the corresponding values for girls were 12.1 × 10^-2, 5.5 × 10^-2 and 3.4 × 10^-2 %.
Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) have not been reported for the GATE and PHITS codes; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in all three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
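The R90 range quoted above is conventionally read off the distal falloff of the PDD curve: the depth beyond the Bragg peak where the dose drops to 90% of the maximum. A minimal sketch of that extraction, using linear interpolation on the distal side of the peak (function name and calling convention are illustrative):

```python
import numpy as np

def distal_range(depth_mm, pdd, level=90.0):
    """Distal range (e.g. R90): depth beyond the dose maximum where the PDD
    first falls below `level` percent, found by linear interpolation.
    Assumes `pdd` is normalized so the peak is 100 and falls below `level`
    somewhere distal to the peak."""
    i_max = int(np.argmax(pdd))
    d, p = depth_mm[i_max:], pdd[i_max:]
    j = np.where(p < level)[0][0]                 # first distal sample below the level
    frac = (p[j - 1] - level) / (p[j - 1] - p[j]) # interpolate the crossing
    return d[j - 1] + frac * (d[j] - d[j - 1])
```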
Abramov, B. M.; Alekseev, P. N.; Borodin, Yu. A.; Bulychjov, S. A.; Dukhovskoy, I. A.; Krutenkova, A. P.; Martemianov, M. A.; Matsyuk, M. A.; Turdakina, E. N.; Khanov, A. I.; Mashnik, Stepan Georgievich
2015-02-03
Momentum spectra of hydrogen isotopes have been measured at 3.5° from ^{12}C fragmentation on a Be target. The momentum spectra cover both the region of the fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and the predictions of some models in the high-momentum region. The INCL++ code gives the best, almost perfect description of the data.
NASA Astrophysics Data System (ADS)
Tani, K.; Shinohara, K.; Oikawa, T.; Tsutsui, H.; McClements, K. G.; Akers, R. J.; Liu, Y. Q.; Suzuki, M.; Ide, S.; Kusama, Y.; Tsuji-Iio, S.
2016-11-01
As part of the verification and validation of a newly developed non-steady-state orbit-following Monte-Carlo code, application studies of time-dependent neutron rates have been made for a specific shot in the Mega Amp Spherical Tokamak (MAST), using 3D fields representing vacuum resonant magnetic perturbations (RMPs) and toroidal field (TF) ripple. The time evolution of the density, temperature and rotation rate in the application of the code to MAST is taken directly from experiment. The calculation results approximately agree with the experimental data. It is also found that a full orbit-following scheme is essential to reproduce the neutron rates in MAST.
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-07
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 μm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 μm thick CVD-diamond layer with 0.2 μm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) ^{60}Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. From a dosimetric point of view, graphite would be an almost perfect
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
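The bootstrap procedure described above — simulating the distribution of the gain estimate and taking the shortest interval containing 95% of it — can be sketched as follows. The paired run-time resampling scheme here is an assumption for illustration, not the paper's exact estimator:

```python
import numpy as np

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of an empirical distribution:
    slide a window of ceil(level*n) sorted samples and keep the narrowest."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.ceil(level * n))
    widths = s[k - 1:] - s[:n - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

def bootstrap_gain_ci(t_conv, t_corr, n_boot=2000, rng=None):
    """Bootstrap the efficiency gain (conventional time / correlated time)
    from paired per-run timing samples; returns the shortest 95% CI."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(t_conv)
    gains = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)               # resample runs with replacement
        gains[b] = t_conv[idx].mean() / t_corr[idx].mean()
    return shortest_ci(gains)
```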
Capogni, M; Lo Meo, S; Fazio, A
2010-01-01
Two CERN Monte Carlo codes, i.e. GEANT3.21 and GEANT4, were compared. The specific routine (sch2for), implemented in GEANT3.21 to simulate a disintegration process, and the G4RadioactiveDecay class, provided by GEANT4, were used for the computation of the full-energy-peak and total efficiencies of several radionuclides. No reference to experimental data was involved. A level of agreement better than 1% for the total efficiency and a deviation lower than 3.5% for the full-energy-peak efficiencies were found.
NASA Astrophysics Data System (ADS)
Wood, Kenneth; Whitney, Barbara A.; Robitaille, Thomas; Draine, Bruce T.
2008-12-01
We have modeled optical to far-infrared images, photometry, and spectroscopy of the object known as Gomez's Hamburger. We reproduce the images and spectrum with an edge-on disk of mass 0.3 M⊙ and radius 1600 AU, surrounding an A0 III star at a distance of 280 pc. Our mass estimate is in excellent agreement with recent CO observations. However, our distance determination is more than an order of magnitude smaller than previous analyses, which inaccurately interpreted the optical spectrum. To accurately model the infrared spectrum we have extended our Monte Carlo radiation transfer codes to include emission from polycyclic aromatic hydrocarbon (PAH) molecules and very small grains (VSG). We do this using precomputed PAH/VSG emissivity files for a wide range of values of the mean intensity of the exciting radiation field. When Monte Carlo energy packets are absorbed by PAHs/VSGs, we reprocess them to other wavelengths by sampling from the emissivity files, thus simulating the absorption and reemission process without reproducing lengthy computations of statistical equilibrium, excitation, and de-excitation in the complex many-level molecules. Using emissivity lookup tables in our Monte Carlo codes gives us the flexibility to use the latest grain physics calculations of PAH/VSG emissivity and opacity that are being continually updated in the light of higher resolution infrared spectra. We find our approach gives a good representation of the observed PAH spectrum from the disk of Gomez's Hamburger. Our models also indicate that the PAHs/VSGs in the disk have a larger scale height than larger radiative equilibrium grains, providing evidence for dust coagulation and settling to the midplane.
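The emissivity-lookup reprocessing idea — sampling a re-emission wavelength from a precomputed table instead of solving statistical equilibrium on the fly — can be illustrated with a simple CDF inversion. The table values below are invented for the sketch; real codes tabulate emissivities versus the mean intensity of the exciting field:

```python
import numpy as np

def sample_reemission(wavelengths, emissivity, rng):
    """Sample a re-emission wavelength from a precomputed emissivity table
    by inverting its normalized cumulative distribution.  In a lookup-table
    Monte Carlo scheme this replaces the level-population calculation each
    time an energy packet is absorbed by a PAH/VSG grain."""
    cdf = np.cumsum(emissivity)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(), cdf, wavelengths)
```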
Okino, Hiroki; Hayashi, Hiroaki; Nakagawa, Kohei; Takegami, Kazuki
2014-12-01
An X-ray spectrum measured with a CdTe detector has to be corrected with a response function, because the spectrum is composed of full-energy peaks (FEP) and escape peaks (EP). Recently, various simulation codes have been developed with which response functions can be calculated easily. The aim of this study is to propose a new method for measuring the response function and to compare it with values calculated by a Monte Carlo simulation code. In this study, characteristic X-rays were used to measure the response function. These X-rays were produced by irradiating metallic atoms with diagnostic X-rays. The measured spectrum contained a background contamination caused by Compton scattering of the irradiating X-rays in the sample material, so we devised a new experimental method to reduce this background. The experimentally derived spectrum was analyzed, and the ratios of EP to FEP (EP/FEP) were calculated for comparison with the simulated values. In this article, we present the properties of the measured response functions and the analysis accuracy of the EP/FEP, and we show that values calculated by a Monte Carlo simulation code can be evaluated using our method.
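The EP/FEP analysis can be sketched as a pair of window sums over a measured spectrum: counts in a window around the escape peak divided by counts around the full-energy peak. The 27 keV escape energy (Te K X-ray region for CdTe) and the 2 keV window are illustrative assumptions, not values from the article:

```python
import numpy as np

def ep_fep_ratio(energies_keV, counts, fep_keV, escape_keV=27.0, window_keV=2.0):
    """EP/FEP ratio from a spectrum: the escape peak sits `escape_keV` below
    the full-energy peak; both peaks are integrated over +-window_keV."""
    def window_sum(center):
        in_window = np.abs(energies_keV - center) <= window_keV
        return counts[in_window].sum()
    return window_sum(fep_keV - escape_keV) / window_sum(fep_keV)
```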
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Tam, Ka-Ming; Thakur, Bhupender; Yun, Zhifeng; Tomko, Karen; Moreno, Juana; Ramanujam, Jagannathan; Jarrell, Mark
2013-03-01
The Edwards-Anderson model is a typical example of a random frustrated system. It has been a long-standing problem in computational physics due to its long relaxation time. Some important properties of the low-temperature spin glass phase are still poorly understood after decades of study. The recent advances in GPU computing provide a new opportunity to substantially improve the simulations. We developed an MPI-CUDA hybrid code with multi-spin coding for parallel tempering Monte Carlo simulation of the Edwards-Anderson model. Since the system size is relatively small, and a large number of parallel replicas and Monte Carlo moves are required, the problem is well suited to modern GPUs with the CUDA architecture. We use the code to perform an extensive simulation of the three-dimensional Edwards-Anderson model with an external field. This work is funded by the NSF EPSCoR LA-SiGMA project under award number EPS-1003897. This work is partly done on the machines of the Ohio Supercomputer Center.
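Multi-spin coding packs one replica per bit of a machine word, so a single XOR processes 64 replicas at once. A minimal sketch of the bit-packing idea for a 1D ±J chain follows; a real GPU code would also perform the Metropolis updates bitwise, which is omitted here:

```python
N_REPLICAS = 64  # one replica per bit of a 64-bit word

def chain_energies(spins, bonds):
    """Multi-spin-coded energy of a 1D +-J chain.

    spins[i]: word whose bit k holds replica k's spin at site i (0 -> +1, 1 -> -1).
    bonds[i]: word with coupling J_i replicated into every bit (0 -> +1, 1 -> -1),
    since all replicas share one disorder realization.
    Bond i is unsatisfied in replica k iff bit k of spins[i]^spins[i+1]^bonds[i]
    is 1; the per-replica energy is E_k = 2*(#unsatisfied) - (#bonds)."""
    n_bonds = len(bonds)
    unsat = [0] * N_REPLICAS
    for i, jb in enumerate(bonds):
        u = spins[i] ^ spins[i + 1] ^ jb   # one XOR handles all 64 replicas
        for k in range(N_REPLICAS):
            unsat[k] += (u >> k) & 1
    return [2 * c - n_bonds for c in unsat]
```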
Monte Carlo Simulation of a 6 MV X-Ray Beam for Open and Wedge Radiation Fields, Using GATE Code.
Bahreyni-Toosi, Mohammad-Taghi; Nasseri, Shahrokh; Momennezhad, Mahdi; Hasanabadi, Fatemeh; Gholamhosseinian, Hamid
2014-10-01
The aim of this study is to provide a control software system, based on Monte Carlo simulation, and calculations of dosimetric parameters of standard and wedge radiation fields, using a Monte Carlo method. GATE version 6.1 (OpenGATE Collaboration) was used to simulate a compact 6 MV linear accelerator system. In order to accelerate the calculations, the phase-space technique and cluster computing (Condor version 7.2.4, Condor Team, University of Wisconsin-Madison) were used. Dosimetric parameters used in treatment planning systems for the standard and wedge radiation fields (10 cm × 10 cm to 30 cm × 30 cm and a 60° wedge), including the percentage depth dose and dose profiles, were obtained by both computational and experimental methods. The gamma index with 3%/3 mm criteria was applied to compare calculated and measured results. Almost all calculated data points satisfied the gamma index criteria. Based on the good agreement between calculated and measured results obtained for various radiation fields in this study, GATE may be used as a useful tool for quality control or pretreatment verification procedures in radiotherapy.
Monte Carlo Simulation of a 6 MV X-Ray Beam for Open and Wedge Radiation Fields, Using GATE Code
Bahreyni-Toosi, Mohammad-Taghi; Nasseri, Shahrokh; Momennezhad, Mahdi; Hasanabadi, Fatemeh; Gholamhosseinian, Hamid
2014-01-01
The aim of this study is to provide a control software system, based on Monte Carlo simulation, and calculations of dosimetric parameters of standard and wedge radiation fields, using a Monte Carlo method. GATE version 6.1 (OpenGATE Collaboration) was used to simulate a compact 6 MV linear accelerator system. In order to accelerate the calculations, the phase-space technique and cluster computing (Condor version 7.2.4, Condor Team, University of Wisconsin–Madison) were used. Dosimetric parameters used in treatment planning systems for the standard and wedge radiation fields (10 cm × 10 cm to 30 cm × 30 cm and a 60° wedge), including the percentage depth dose and dose profiles, were obtained by both computational and experimental methods. The gamma index with 3%/3 mm criteria was applied to compare calculated and measured results. Almost all calculated data points satisfied the gamma index criteria. Based on the good agreement between calculated and measured results obtained for various radiation fields in this study, GATE may be used as a useful tool for quality control or pretreatment verification procedures in radiotherapy. PMID:25426430
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Jabbari, Keivan
A fast and accurate treatment planning system is essential for radiation therapy, and Monte Carlo (MC) techniques produce the most accurate results for dose calculation in treatment planning. In this work, we developed a fast Monte Carlo code based on pre-calculated data (PMC, Pre-calculated Monte Carlo) for applications in radiation therapy treatment planning. The PMC code takes advantage of the large memory available in current computer hardware for extensive generation of pre-calculated data. Primary tracks of electrons are generated in the middle of homogeneous materials (water, air, bone, lung), with energies between 0.2 and 18 MeV, using the EGSnrc code. Secondary electrons are not transported, but their position, energy, charge and direction are saved and used as primary particles. Based on the medium type and incident electron energy, a track is selected from the pre-calculated set. The performance of the method was tested in various homogeneous and heterogeneous configurations, and the results were generally within 2% of EGSnrc but with a 40-60 times speed improvement. The limitations of various techniques for improving the speed and accuracy of particle transport were evaluated. We studied the obstacles to further speed-ups in voxel-based geometries by including ray-tracing and particle fluence information in the pre-generated track information. The latter method leads to speed increases of about a factor of 500 over EGSnrc for voxel-based geometries. In both approaches, no physical calculation is carried out during the runtime phase after the pre-generated data have been stored, even in the presence of heterogeneities. The pre-calculated data are generated for each particular material, and this improves the performance of the pre-calculated Monte Carlo code in terms of both accuracy and speed. The PMC is also extended for proton transport in radiation therapy. The pre-calculated data is based on tracks of 1000 primary protons using
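The pre-calculated track reuse at the heart of such a scheme can be sketched as a table lookup followed by a replay of stored energy deposits; the library layout, binning, and names below are illustrative, not the PMC format:

```python
import random

def select_track(library, medium, energy_mev, bin_width_mev=0.5, rng=random):
    """Pick a stored primary track for the given medium and energy bin.

    `library` is a hypothetical pre-generated store mapping
    (medium, energy bin) -> list of tracks, each track a list of
    (dx, dy, dz, edep) steps relative to the track origin -- the kind of
    data a pre-calculated MC code would generate offline with EGSnrc."""
    key = (medium, int(energy_mev / bin_width_mev))
    return rng.choice(library[key])

def replay(track, origin):
    """Replay a stored track from `origin`: no physics at run time, only
    translation of the pre-calculated energy deposits to the new start point."""
    x, y, z = origin
    deposits = []
    for dx, dy, dz, edep in track:
        x, y, z = x + dx, y + dy, z + dz
        deposits.append(((x, y, z), edep))
    return deposits
```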
NASA Astrophysics Data System (ADS)
Malouch, Fadhel
2016-02-01
An irradiation program, DV50, was carried out from 2002 to 2006 in the OSIRIS material testing reactor (CEA-Saclay center) to assess the pressure-vessel steel toughness curve for a fast neutron fluence (E > 1 MeV) equivalent to a French 900-MWe PWR lifetime of 50 years. This program allowed the irradiation of 120 vessel-steel specimens, subdivided into two successive irradiations, DV50 no. 1 and DV50 no. 2. To measure the fast neutron fluence (E > 1 MeV) received by the specimens after each irradiation, sample holders were equipped with activation foils that were withdrawn at the end of irradiation for activity counting and processing. The fast effective cross-sections used in the dosimeter processing were determined with a specific calculation scheme based on the Monte-Carlo code TRIPOLI-3 (and the nuclear data ENDF/B-VI and IRDF-90). In order to put vessel-steel experiments on the same standard, a new dosimetric interpretation of the DV50 experiment has been performed using the Monte-Carlo code TRIPOLI-4 and more recent nuclear data (JEFF3.1.1 and IRDF-2002). This paper presents a comparison of previous and recent calculations performed for the DV50 vessel-steel experiment to assess the impact on the dosimetric interpretation.
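Activation-foil dosimetry of the kind described above rests on the textbook activation equation. A sketch, assuming constant flux, a single threshold reaction with effective cross-section σ, and counting at end of irradiation (EOI) — this is the generic relation, not the TRIPOLI processing scheme itself:

```python
import math

def fast_fluence_cm2(a_eoi_bq, n_atoms, sigma_cm2, t_irr_s, half_life_s):
    """Fast-neutron fluence inferred from a foil's end-of-irradiation activity.

    Activation equation:  A(EOI) = phi * sigma * N * (1 - exp(-lambda * t_irr)),
    so  phi = A / (N * sigma * (1 - exp(-lambda * t_irr)))  and the
    fluence is phi * t_irr (constant-flux assumption)."""
    lam = math.log(2.0) / half_life_s
    flux = a_eoi_bq / (n_atoms * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s)))
    return flux * t_irr_s
```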
Monte Carlo Reliability Analysis.
1987-10-01
… Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life…
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2013-11-21
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2014-01-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images
NASA Astrophysics Data System (ADS)
Botta, F.; Mairani, A.; Hobbs, R. F.; Vergara Gil, A.; Pacilio, M.; Parodi, K.; Cremonesi, M.; Coca Pérez, M. A.; Di Dia, A.; Ferrari, M.; Guerriero, F.; Battistoni, G.; Pedroli, G.; Paganelli, G.; Torres Aroche, L. A.; Sgouros, G.
2013-11-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
Cho, Sang Hyun; Vassiliev, Oleg N; Horton, John L
2007-03-01
An event-by-event Monte Carlo code called NOREC, a substantially improved version of the Oak Ridge electron transport code (OREC), was released in 2003, after a number of modifications to OREC. In spite of some earlier work, the characteristics of the code have not been clearly shown so far, especially for a wide range of electron energies. Therefore, NOREC was used in this study to generate one of the popular dosimetric quantities, the scaled point kernel, for a number of electron energies between 0.02 and 1.0 MeV. Calculated kernels were compared with the most well-known published kernels based on a condensed history Monte Carlo code, ETRAN, to show not only general agreement between the codes for the electron energy range considered but also possible differences between an event-by-event code and a condensed history code. There was general agreement between the kernels within about 5% up to 0.7 r/r_0 for 100 keV and 1 MeV electrons. Note that r/r_0 denotes the scaled distance, where r is the radial distance from the source to the dose point and r_0 is the continuous slowing down approximation (CSDA) range of a mono-energetic electron. For the same range of scaled distances, the discrepancies for 20 and 500 keV electrons were up to 6 and 12%, respectively. Especially, there was more pronounced disagreement for 500 keV electrons than for 20 keV electrons. The degree of disagreement for 500 keV electrons decreased when NOREC results were compared with published EGS4/PRESTA results, producing similar agreement to other electron energies.
NASA Astrophysics Data System (ADS)
Roomi, A.; Habibi, M.; Saion, E.; Amrollahi, R.
2011-02-01
In this study we present a Monte Carlo method for obtaining the time-resolved energy spectra of neutrons emitted by the D-D reaction in plasma focus devices. The angular positions of the detectors were chosen to maximize the fidelity of the reconstructed neutron spectrum. The detectors were arranged over a range of 0-22.5 m from the source and at 0°, 30°, 60°, and 90° with respect to the central axis. The results show that an arrangement of five detectors placed at 0, 2, 7.5, 15 and 22.5 m around the central electrode of the plasma focus, treated as an anisotropic neutron source, is required. The reconstructed spectrum shows that the source-to-detector distances can be reduced while the final reconstructed signal is still obtained with very good accuracy.
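The reconstruction rests on the time-of-flight relation between detector distance and neutron arrival time. A minimal sketch of the non-relativistic conversion, adequate at the ~2.45 MeV of the D-D reaction; the 22.5 m distance is the farthest detector position quoted in the abstract.

```python
M_N_MEV = 939.565      # neutron rest-mass energy [MeV]
C = 299_792_458.0      # speed of light [m/s]

def tof(distance_m, energy_mev):
    """Flight time [s] of a neutron of given kinetic energy
    (non-relativistic approximation, fine at a few MeV)."""
    v = C * (2.0 * energy_mev / M_N_MEV) ** 0.5
    return distance_m / v

def energy_from_tof(distance_m, time_s):
    """Invert the relation: kinetic energy [MeV] from a measured flight time."""
    v = distance_m / time_s
    return 0.5 * M_N_MEV * (v / C) ** 2
```

At 22.5 m a 2.45 MeV neutron arrives after roughly a microsecond, so the energy spread of the source maps into a time spread that a fast detector can resolve.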
Sarrut, David; Bardiès, Manuel; Boussion, Nicolas; Freud, Nicolas; Jan, Sébastien; Létang, Jean-Michel; Loudos, George; Maigne, Lydia; Marcatili, Sara; Mauxion, Thibault; Papadimitroulas, Panagiotis; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; Schaart, Dennis R; Visvikis, Dimitris; Buvat, Irène
2014-06-01
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE making it easy to model both a treatment and an imaging acquisition within the same framework is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Sarrut, David; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE making it easy to model both a treatment and an imaging acquisition within the same framework is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
NASA Astrophysics Data System (ADS)
Cho, Gyu-Seok; Kim, Kum-Bae; Choi, Sang-Hyoun; Song, Yong-Keun; Lee, Soon-Sung
2017-01-01
Recently, Monte Carlo methods have been used to optimize the design and modeling of radiation detectors. However, most Monte Carlo codes have a fixed and simple optical physics, and the effect of the signal readout devices is not considered because of the limitations of the geometry function. These shortcomings prevent accurate modeling of scintillator detectors. The modeling of a comprehensive and extensive detector system has been reported to be feasible when the optical physics model of the GEometry ANd Tracking 4 (GEANT4) simulation code is used. In this study, we modeled a Gd2O3:Eu scintillator using the GEANT4 simulation code and compared the results with measurement data. To obtain the measurement data for the scintillator, we synthesized the Gd2O3:Eu scintillator by the solution combustion method and evaluated its characteristics using X-ray diffraction and photoluminescence. We imported the measured data into the GEANT4 code because GEANT4 cannot simulate the fluorescence phenomenon. The imported data were used as an energy distribution for optical photon generation based on the energy deposited in the scintillator. As a result of the simulation, a strong emission peak consistent with the measured data was observed at 611 nm, and the overall trends of the spectrum agreed with the measured data. This result is significant because the characteristics of the scintillator are equally implemented in the simulation, indicating a valuable improvement in the modeling of scintillator-based radiation detectors.
State-of-the-art Monte Carlo 1988
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
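Two items from the outline can be sketched in a few lines: the hit-or-miss estimate of π, and inverse-transform sampling of an exponential free path, a standard building block of particle-transport Monte Carlo. This is an illustrative sketch, not the lecture's own code.

```python
import math
import random

def estimate_pi(n, seed=0):
    """Hit-or-miss estimate of pi: the fraction of uniform points in the
    unit square falling inside the quarter circle tends to pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

def sample_free_path(rng, sigma_t):
    """Inverse-transform sampling of an exponential free path: solving
    u = 1 - exp(-sigma_t * x) for x gives x = -ln(1 - u) / sigma_t."""
    return -math.log(1.0 - rng.random()) / sigma_t
```

By the Central Limit Theorem the error of `estimate_pi` shrinks as 1/√n, which is why Monte Carlo needs many histories for each additional significant digit.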
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
GOORLEY, TIM
2013-07-16
Version 01 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
GOORLEY, TIM
2013-07-16
Version 00 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
Champion, C; Incerti, S; Perrot, Y; Delorme, R; Bordage, M C; Bardiès, M; Mascialino, B; Tran, H N; Ivanchenko, V; Bernal, M; Francis, Z; Groetz, J-E; Fromm, M; Campos, L
2014-01-01
Modeling radio-induced effects in biological media still requires accurate physics models to describe in detail the interactions induced by all the charged particles present in the irradiated medium. These interactions include inelastic as well as elastic processes. To check the accuracy of the very low energy models recently implemented into the GEANT4 toolkit for modeling electron slowing-down in liquid water, the simulation of electron dose point kernels remains the preferential test. In this context, we report here normalized radial dose profiles, for mono-energetic point sources, computed in liquid water using the very low energy "GEANT4-DNA" physics processes available in the GEANT4 toolkit. In the present study, we report an extensive intercomparison with profiles obtained by a large selection of existing and well-documented Monte Carlo codes, namely, EGSnrc, PENELOPE, CPA100, FLUKA and MCNPX.
NASA Astrophysics Data System (ADS)
Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel
2017-03-01
The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.
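The S-values compared above follow the MIRD formalism, S = Σᵢ yᵢ Eᵢ φᵢ / m for a given source→target pair, where yᵢ is the yield per decay, Eᵢ the emission energy, φᵢ the absorbed fraction and m the target mass. A minimal sketch with illustrative emission data (these numbers are not values from the ICRP 110 study):

```python
MEV_TO_J = 1.602176634e-13  # exact conversion, MeV -> joule

def s_value(emissions, absorbed_fractions, target_mass_kg):
    """MIRD S-value [Gy/(Bq s)] for one source->target pair.

    emissions:          list of (yield_per_decay, energy_MeV) tuples
    absorbed_fractions: matching absorbed fractions phi_i (target <- source)
    """
    return sum(y * e_mev * MEV_TO_J * phi
               for (y, e_mev), phi in zip(emissions, absorbed_fractions)
               ) / target_mass_kg

# Illustrative only: one 0.5 MeV electron per decay, fully absorbed
# in a 1 kg self-irradiated target.
s_self = s_value([(1.0, 0.5)], [1.0], 1.0)
```

Multiplying such an S-value by the time-integrated activity in the source region gives the mean absorbed dose to the target, which is the quantity the GATE/MCNPX comparison above validates region by region.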
Doerner, Edgardo; Caprile, Paola
2016-09-01
The shape of the radiation source of a linac has a direct impact on the delivered dose distributions, especially in the case of small radiation fields. Traditionally, a single Gaussian source model is used to describe the electron beam hitting the target, although different studies have shown that the shape of the electron source can be better described by a mixed distribution consisting of two Gaussian components. Therefore, this study presents the implementation of a double Gaussian source model into the BEAMnrc Monte Carlo code. The impact of the double Gaussian source model for a 6 MV beam is assessed through the comparison of different dosimetric parameters calculated using a single Gaussian source, previously commissioned, the new double Gaussian source model and measurements, performed with a diode detector in a water phantom. It was found that the new source can be easily implemented into the BEAMnrc code and that it improves the agreement between measurements and simulations for small radiation fields. The impact of the change in source shape becomes less important as the field size increases and for increasing distance of the collimators to the source, as expected. In particular, for radiation fields delivered using stereotactic collimators located at a distance of 59 cm from the source, it was found that the effect of the double Gaussian source on the calculated dose distributions is negligible, even for radiation fields smaller than 5 mm in diameter. Accurate determination of the shape of the radiation source allows us to improve the Monte Carlo modeling of the linac, especially for treatment modalities such as IMRT, where the radiation beams used can be very narrow, becoming more sensitive to the shape of the source. PACS number(s): 87.53.Bn, 87.55.K, 87.56.B-, 87.56.jf.
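Sampling a double Gaussian source amounts to drawing the primary electron's lateral position from a two-component Gaussian mixture. A sketch with illustrative weights and widths (not commissioned source parameters):

```python
import random

def sample_double_gaussian(rng, w_narrow, sigma_narrow, sigma_broad):
    """Lateral (x, y) position [mm] of a primary electron drawn from a
    mixture of two centred Gaussians: with probability w_narrow use the
    narrow component, otherwise the broad one."""
    sigma = sigma_narrow if rng.random() < w_narrow else sigma_broad
    return rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
```

The mixture variance is w·σ₁² + (1−w)·σ₂², so a small broad component mainly feeds the profile tails, which is consistent with the effect above appearing only for small fields.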
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
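The regression step described above can be illustrated with an ordinary least-squares line in temperature: data tabulated at a few temperatures are compressed into fit coefficients that are then evaluated on the fly at the local material temperature during the random walk. The data below are synthetic; real thermal-scattering data need more elaborate fits.

```python
def fit_linear(temps, values):
    """Ordinary least squares for values ~ a + b*T, compressing data
    tabulated at a few temperatures into two coefficients."""
    n = len(temps)
    mt, mv = sum(temps) / n, sum(values) / n
    b = (sum((t - mt) * (v - mv) for t, v in zip(temps, values))
         / sum((t - mt) ** 2 for t in temps))
    return mv - b * mt, b

def evaluate(coeffs, temperature):
    """On-the-fly evaluation at an arbitrary temperature, replacing a
    pre-stored library at that temperature."""
    a, b = coeffs
    return a + b * temperature
```

The memory saving quoted in the abstract comes from exactly this trade: storing a handful of coefficients per nuclide instead of full tabulations at every temperature the coupled TH code might request.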
Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe
2015-01-01
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the Geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from Geant4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LETt and dose-averaged LET, LETd) using Geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in Geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm to determine fluctuations in energy deposition along the
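The two averages differ only in their weights: LETt weights each step's dE/dl by step length (a fluence average), while LETd weights it by the energy deposited (a dose average). A minimal sketch over per-step (ΔE, l) pairs, showing why LETd is the quantity sensitive to fluctuations in energy deposition per step:

```python
def average_lets(steps):
    """Track- and dose-averaged LET from per-step (energy_deposit, length)
    pairs, in any consistent units (e.g. keV and um).

    LET_t = sum(l_i * (dE_i/l_i)) / sum(l_i)  -- fluence-weighted
    LET_d = sum(dE_i * (dE_i/l_i)) / sum(dE_i) -- dose-weighted
    """
    total_len = sum(l for _, l in steps)
    total_e = sum(e for e, _ in steps)
    let_t = total_e / total_len  # the length weights cancel
    let_d = sum(e * (e / l) for e, l in steps) / total_e
    return let_t, let_d
```

Because each step's stopping power enters LETd squared through the ΔE weight, any step-limit artefact in the energy-deposition-per-step distribution is amplified in LETd while LETt, which only sees totals, is largely unaffected.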
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probe accurately and reliably.
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gasphase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, needs to be evaluated. This means that the relationship between the effect of input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique. We
NASA Astrophysics Data System (ADS)
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.
2014-06-01
Pile-oscillation experiments are performed in the MINERVE reactor at the CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® by using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To address this problem, it has been decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has given good results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement for a computation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high
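The limitation of the "direct" eigenvalue-difference method can be made concrete: ρ = 1/k₁ − 1/k₂, so the statistical uncertainties on both k-eff values propagate directly into σ_ρ, independent of how small ρ itself is. A sketch with illustrative numbers:

```python
def reactivity_difference_pcm(k1, sigma1, k2, sigma2):
    """'Direct' eigenvalue-difference estimate rho = 1/k1 - 1/k2 [pcm],
    with its uncertainty from standard first-order error propagation."""
    rho = (1.0 / k1 - 1.0 / k2) * 1e5
    sigma = ((sigma1 / k1 ** 2) ** 2 + (sigma2 / k2 ** 2) ** 2) ** 0.5 * 1e5
    return rho, sigma

# Illustrative numbers: a 10 pcm perturbation with a 5 pcm (1-sigma)
# uncertainty on each k-eff leaves a ~7 pcm error bar on the difference.
rho, sigma = reactivity_difference_pcm(1.00000, 5e-5, 1.00010, 5e-5)
```

Since σ_ρ does not shrink with the perturbation, resolving a sub-10 pcm effect directly requires driving both k-eff variances far below typical values, which is the motivation for the exact-perturbation approach described in the abstract.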
Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.
2006-07-01
Verification of the SUHAM-U code has been carried out by calculating the two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons have been made with experimental data and with calculations by the Monte Carlo code UNK using the same nuclear data library (B645) for the basic isotopes. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The applicability of the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)
Descalle, M; Pruet, J
2008-06-09
Livermore's nuclear data group developed a new verification and validation test suite to ensure the quality of data used in application codes. This is based on models of LLNL's pulsed sphere fusion shielding benchmark experiments. Simulations were done with Mercury, a 3D particle transport Monte Carlo code using continuous-energy cross-section libraries. Results were compared to measurements of neutron leakage spectra generated by 14MeV neutrons in 17 target assemblies (for a blank target assembly, H{sub 2}O, Teflon, C, N{sub 2}, Al, Si, Ti, Fe, Cu, Ta, W, Au, Pb, {sup 232}Th, {sup 235}U, {sup 238}U, and {sup 239}Pu). We also tested the fidelity of simulations for photon production associated with neutron interactions in the different materials. Gamma-ray leakage energy per neutron was obtained from a simple 1D spherical geometry assembly and compared to three codes (TART, COG, MCNP5) and several versions of the Evaluated Nuclear Data File (ENDF) and Evaluated Nuclear Data Libraries (ENDL) cross-section libraries. These tests uncovered a number of errors in photon production cross-sections, and were instrumental to the V&V of different cross-section libraries. Development of the pulsed sphere tests also uncovered the need for new Mercury capabilities. To enable simulations of neutron time-of-flight experiments the nuclear data group implemented an improved treatment of biased angular scattering in MCAPM.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
White, Morgan C.
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju
2015-01-01
SRIM-like codes have limitations in describing general 3D geometries and in modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) methods for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ~10^2 times faster in serial execution and >10^4 times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the "Quick Kinchin-Pease" and "Full Cascades" options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477
NASA Astrophysics Data System (ADS)
Krynicka, Ewa; Wiącek, Urszula; Drozdowicz, Krzysztof; Gabańska, Barbara; Tracz, Grzegorz
2006-09-01
A comparison of real and Monte Carlo simulated pulsed neutron experiments in two-zone cylindrical systems is presented. Such geometry is met when a neutron moderator surrounds a sample of the investigated material. In this study, a Plexiglas shell (a hydrogenous medium) surrounds the inner zone filled with a non-hydrogenous medium: copper oxide or chrome oxide. The time decay constant of the thermal neutron flux is determined as the result of the experiment. The primary simulations were made using the MCNP code with the attached standard thermal neutron scattering library for hydrogen in polyethylene (poly.01t). A modification of this library is proposed to obtain data dedicated more precisely to the scattering of neutrons on hydrogen in Plexiglas in the thermal energy region. Results of the simulations for two-zone cylindrical systems, using the MCNP code with the modified hydrogen data library, show considerably better agreement with the experimental results. The average relative deviations decreased from about 2% (always positive) to less than 0.5%, fluctuating around zero. The adequacy of the applied modification is also confirmed by simulations of the pulsed neutron experiments on homogeneous cylinders of Plexiglas.
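The observable in such a pulsed experiment, the time decay constant of the thermal neutron flux, is in practice extracted by fitting an exponential die-away curve to the registered counts. A minimal sketch of such a fit on synthetic data (not the authors' analysis; the function name and numbers are illustrative):

```python
import math

def fit_decay_constant(times, counts):
    """Least-squares fit of ln(counts) = ln(A) - lam * t; returns lam.

    Assumes a single-exponential die-away, as in the fundamental-mode
    decay of the thermal neutron flux after a pulse.
    """
    ys = [math.log(c) for c in counts]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(times, ys))
    sxx = sum((x - mx) ** 2 for x in times)
    return -sxy / sxx  # the fitted slope is -lam

# Synthetic die-away curve with a known (hypothetical) decay constant.
lam_true = 4500.0  # s^-1
times = [i * 1.0e-5 for i in range(1, 50)]  # s
counts = [1.0e6 * math.exp(-lam_true * t) for t in times]
lam_fit = fit_decay_constant(times, counts)  # recovers ~4500 s^-1
```

In a real analysis the early-time transient (higher spatial modes) is excluded before fitting; the sketch skips that step.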
Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0
Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.
1996-10-01
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed externally generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.
Karalidi, Theodora; Apai, Dániel; Schneider, Glenn; Hanson, Jake R.; Pasachoff, Jay M.
2015-11-20
Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VIZIR.
Robert, C; Dedes, G; Battistoni, G; Böhlen, T T; Buvat, I; Cerutti, F; Chin, M P W; Ferrari, A; Gueth, P; Kurz, C; Lestand, L; Mairani, A; Montarou, G; Nicolini, R; Ortega, P G; Parodi, K; Prezado, Y; Sala, P R; Sarrut, D; Testa, E
2013-05-07
Monte Carlo simulations play a crucial role for in-vivo treatment monitoring based on PET and prompt gamma imaging in proton and carbon-ion therapies. The accuracy of the nuclear fragmentation models implemented in these codes might affect the quality of the treatment verification. In this paper, we investigate the nuclear models implemented in GATE/Geant4 and FLUKA by comparing the angular and energy distributions of secondary particles exiting a homogeneous target of PMMA. Comparison results were restricted to fragmentation of 16O and 12C. Despite the very simple target and set-up, substantial discrepancies were observed between the two codes. For instance, the number of high energy (>1 MeV) prompt gammas exiting the target was about twice as large with GATE/Geant4 as with FLUKA, both for proton and carbon ion beams. Such differences were not observed for the predicted annihilation photon production yields, for which ratios of 1.09 and 1.20 were obtained between GATE and FLUKA for the proton beam and the carbon ion beam, respectively. For neutrons and protons, discrepancies from 14% (exiting protons, carbon ion beam) to 57% (exiting neutrons, proton beam) have been identified in production yields as well as in the energy spectra for neutrons.
Challenges of Monte Carlo Transport
Long, Alex Roberts
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, parallel computational physics and parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
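The independence of particle histories noted above is what makes Monte Carlo transport embarrassingly parallel: each history needs only its own random-number stream, and tallies are summed afterwards. A toy sketch under that idea (a purely absorbing slab, not BRANSON or IMC; all names hypothetical):

```python
import math
import random

def run_history(seed, sigma_t=1.0, thickness=2.0):
    """One neutral-particle history through a purely absorbing slab:
    sample a free-flight distance and score 1 if the slab is crossed."""
    rng = random.Random(seed)  # private stream: histories stay independent
    distance = -math.log(1.0 - rng.random()) / sigma_t
    return 1 if distance > thickness else 0

def estimate_transmission(n_histories, base_seed=12345):
    # Each history depends only on its own seed, so this loop could be
    # split across ranks or threads with no communication until the sum.
    hits = sum(run_history(base_seed + i) for i in range(n_histories))
    return hits / n_histories

p = estimate_transmission(200000)
# The analytic answer for this toy slab is exp(-sigma_t * thickness) ~ 0.135.
```

The domain-decomposed methods in the slides arise precisely when this picture breaks down: when the mesh no longer fits on one node, either particles or mesh data must be communicated.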
Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe; Bronk, Lawrence; Geng, Changran; Grosshans, David
2015-11-15
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. A further purpose was to provide a recommendation for selecting an appropriate LET quantity from GEANT4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT4 for different tracking step size limits. A step size limit refers to the maximum allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to
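The two LET averages the abstract contrasts can be written down directly: the track average weights each step's dE/dx by its length, while the dose average weights it by the energy deposited, which is why LET_d is dominated by rare high-deposition steps and hence sensitive to the step limit. A minimal sketch (not the authors' GEANT4 code; the step list and function names are hypothetical):

```python
def track_averaged_let(steps):
    """steps: list of (energy_deposited, step_length) pairs.
    Weights each step's dE/dx by its length: sum(l*(e/l)) / sum(l)."""
    return sum(e for e, _ in steps) / sum(l for _, l in steps)

def dose_averaged_let(steps):
    """Weights each step's dE/dx by the energy (dose) it deposits:
    sum(e*(e/l)) / sum(e)."""
    return sum(e * (e / l) for e, l in steps) / sum(e for e, _ in steps)

# Nine quiet steps and one rare high-deposition step (keV, um; illustrative):
steps = [(1.0, 1.0)] * 9 + [(10.0, 1.0)]
let_t = track_averaged_let(steps)  # 1.9 keV/um
let_d = dose_averaged_let(steps)   # ~5.74 keV/um, dominated by the rare step
```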
Valentine, T.E.; Mihalczo, J.T.
1995-12-31
This paper describes calculations performed to validate MCNP-DSP, a modified version of the MCNP code, for: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency analysis parameters.
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Refined analysis and modeling of nuclear reactors can exceed the memory of a single processor core. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Monte Carlo calculation for the development of a BNCT neutron source (1 eV-10 keV) using the MCNP code.
El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H
2008-09-01
Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H2O, D2O and BeO) and their combinations, four reflectors (Al2O3, C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H2O, BeO and D2O with an Al2O3 reflector and the two filters Cd+Bi; it raises the epithermal fraction of the neutron spectrum to 72% and reduces the thermal neutron fraction to 4%, and thus can be used to treat deep brain tumors. The calculations were performed by means of the Monte Carlo N-Particle code (MCNP5). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.
NASA Astrophysics Data System (ADS)
Ezzati, A. O.; Sohrabpour, M.
2013-02-01
In this study, azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX 2.4 source code. The efficiency of these methods was first compared for two tallying methods. APRS is more efficient than APR in track-length estimator tallies; in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. APRS relative efficiency contours were obtained; they reveal that with increasing photon energy, the depth of the contours and the surrounding areas increase. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out-of-field voxel relative efficiency contours showed that the latent variance reduction methods increase the Monte Carlo (MC) simulation efficiency in out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000.
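Rotational splitting of the kind described exploits the azimuthal symmetry of a beam about its axis: one particle can be replaced by N copies rotated uniformly about the axis, each carrying 1/N of the weight, without biasing the tally expectation. A minimal geometric sketch, independent of MCNPX (names hypothetical):

```python
import math

def azimuthal_split(x, y, z, weight, n_split):
    """Replace one particle by n_split copies rotated uniformly about the
    beam (z) axis, each carrying weight / n_split. Unbiased only when the
    problem is symmetric in the azimuthal angle."""
    copies = []
    for k in range(n_split):
        phi = 2.0 * math.pi * k / n_split
        c, s = math.cos(phi), math.sin(phi)
        # Rotate position in the x-y plane; z is unchanged.
        copies.append((c * x - s * y, s * x + c * y, z, weight / n_split))
    return copies

copies = azimuthal_split(3.0, 4.0, 1.0, 1.0, 8)
# Total statistical weight and distance from the axis are both preserved.
```

A real implementation rotates the direction vector together with the position; the sketch shows the position-and-weight bookkeeping only.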
NASA Astrophysics Data System (ADS)
Chao, T. C.; Xu, X. G.
2001-04-01
VIP-Man is a whole-body anatomical model newly developed at Rensselaer from the high-resolution colour images of the National Library of Medicine's Visible Human Project. This paper summarizes the use of VIP-Man and the Monte Carlo method to calculate specific absorbed fractions from internal electron emitters. A specially designed EGS4 user code, named EGS4-VLSI, was developed to handle the extremely large amount of image data contained in VIP-Man. Monoenergetic and isotropic electron emitters with energies from 100 keV to 4 MeV are considered to be uniformly distributed in 26 organs. This paper presents, for the first time, results of internal electron exposures based on a realistic whole-body tomographic model. Because VIP-Man has many organs and tissues that were previously not well defined (or not available) in other models, the efforts at Rensselaer and elsewhere bring an unprecedented opportunity to significantly improve internal dosimetry.
Bobin, C; Thiam, C; Bouchard, J
2016-03-01
At LNE-LNHB, a liquid scintillation (LS) detection setup designed for Triple to Double Coincidence Ratio (TDCR) measurements is also used in the β-channel of a 4π(LS)β-γ coincidence system. This LS counter, based on 3 photomultipliers, was first modeled using the Monte Carlo code Geant4 to enable the simulation of optical photons produced by scintillation and Cerenkov effects. This stochastic modeling was especially designed for the calculation of double and triple coincidences between photomultipliers in TDCR measurements. In the present paper, the TDCR-Geant4 model is extended to 4π(LS)β-γ coincidence counting to enable simulation of the efficiency-extrapolation technique by the addition of a γ-channel. This simulation tool aims at predicting systematic biases in activity determination due to possible non-linearity of efficiency-extrapolation curves. First results are described in the case of the standardization of 59Fe. The variation of the γ-efficiency in the β-channel due to Cerenkov emission is investigated in the case of the activity measurements of 54Mn. The problem of non-linearity between β-efficiencies is illustrated in the case of the efficiency-tracing technique for the activity measurements of 14C using 60Co as a tracer.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
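The mode-search idea in the last sentence — search an energy surface left over after subtracting a Gaussian-mixture fit of the known modes from the target density — can be illustrated in one dimension. A hedged sketch with toy densities (all names hypothetical; not the authors' implementation):

```python
import math

def gauss(x, mu, sigma):
    """Normal density, used both for the toy target and the mixture fit."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target_density(x):
    # Toy bimodal target with modes at -3 and +3 (illustration only).
    return 0.5 * gauss(x, -3.0, 0.5) + 0.5 * gauss(x, 3.0, 0.5)

def residual_energy(x, known_modes, eps=1e-12):
    """Energy of the target density minus a Gaussian-mixture approximation
    of the modes found so far; low values flag undiscovered mass."""
    q = sum(w * gauss(x, mu, sigma) for w, mu, sigma in known_modes)
    return -math.log(max(target_density(x) - q, eps))

# Suppose only the mode at -3 has been found so far:
known = [(0.5, -3.0, 0.5)]
# The residual energy is far lower near the undiscovered mode at +3 than
# near the already-known mode at -3, so a search is drawn to +3.
```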
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multigroup SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multigroup SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
NASA Astrophysics Data System (ADS)
Mashnik, Stepan G.; Kerby, Leslie M.; Gudima, Konstantin K.; Sierk, Arnold J.; Bull, Jeffrey S.; James, Michael R.
2017-03-01
We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-Particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for a possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7, in the case of CEM, and A ≤ 12, in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Finally, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have an improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
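Monte Carlo grain growth of this kind is commonly implemented as a Potts model: each lattice site carries a grain ID, and trial reassignments to a neighboring grain ID are accepted when they do not raise the boundary energy. A minimal zero-temperature sketch on a square grid (a deliberate simplification of IMCGG's hexagonal grid and isotropic energy; all names hypothetical):

```python
import random

def neighbors(i, j, n):
    # Periodic boundaries; a 4-neighbor square grid stands in for the
    # hexagonal grid used by IMCGG (a simplification).
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

def site_energy(grid, i, j, spin, n):
    # Isotropic case: unit energy per unlike-neighbor pair.
    return sum(1 for a, b in neighbors(i, j, n) if grid[a][b] != spin)

def total_energy(grid):
    n = len(grid)
    return sum(site_energy(grid, i, j, grid[i][j], n)
               for i in range(n) for j in range(n))

def potts_step(grid, rng):
    """One zero-temperature Metropolis trial: reassign a random site to a
    neighboring grain ID if that does not raise the boundary energy."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    new_spin = rng.choice([grid[a][b] for a, b in neighbors(i, j, n)])
    d_e = site_energy(grid, i, j, new_spin, n) - site_energy(grid, i, j, grid[i][j], n)
    if d_e <= 0:
        grid[i][j] = new_spin

rng = random.Random(7)
n = 16
grid = [[rng.randrange(5) for _ in range(n)] for _ in range(n)]
e_start = total_energy(grid)
for _ in range(5000):
    potts_step(grid, rng)
e_end = total_energy(grid)  # never above e_start: energy is non-increasing
```

An anisotropic variant would replace the unit pair energy in `site_energy` with a misorientation- and inclination-dependent value.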
NAGAYA, YASANOBU
2008-02-29
Version 00 (1) Problems to be solved: MVP/GMVP II can solve eigenvalue and fixed-source problems. The multigroup code GMVP can solve forward and adjoint problems for neutron, photon and neutron-photon coupled transport. The continuous-energy code MVP can solve only the forward problems. Both codes can also perform time-dependent calculations. (2) Geometry description: MVP/GMVP employs combinatorial geometry to describe the calculation geometry. It describes spatial regions by the combination of the 3-dimensional objects (BODIes). Currently, the following objects (BODIes) can be used. - BODIes with linear surfaces : half space, parallelepiped, right parallelepiped, wedge, right hexagonal prism - BODIes with quadratic surface and linear surfaces : cylinder, sphere, truncated right cone, truncated elliptic cone, ellipsoid by rotation, general ellipsoid - Arbitrary quadratic surface and torus The rectangular and hexagonal lattice geometry can be used to describe the repeated geometry. Furthermore, the statistical geometry model is available to treat coated fuel particles or pebbles for high temperature reactors. (3) Particle sources: The various forms of energy-, angle-, space- and time-dependent distribution functions can be specified. See Abstract for more detail.
NASA Astrophysics Data System (ADS)
Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej
2016-08-01
The success of proton therapy depends strongly on the precision of treatment planning. Dose distributions in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making very accurate calculations possible. However, many factors affect the accuracy of the modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in the water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm and 1.6% over the full depth range, for a maximum calculation uncertainty below 2.4% and a maximum measuring error of 1%. For the relative doses calculated with the ionization chamber model this average difference was somewhat greater: 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors; the results provided by both models are similar and in good agreement with the measurements. The logical detector approach is, however, a more time-effective method.
Carver, D; Kost, S; Pickens, D; Price, R; Stabin, M
2014-06-15
Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard CT 100-mm pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner at 100 and 120 kVp and 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared to both CTDI100 values determined from the ion chamber and to CTDI100 values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI100 values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%), and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI100 values agree to within 10% or less, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool with the significant benefit of also assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry, such as verification of the spatial dose distribution and beam width.
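The CTDI100 values compared above are defined as the integral of the dose profile D(z) over a 100-mm window divided by the total nominal beam width N·T. A sketch of the numerical evaluation (trapezoid rule; the function name and the rectangular profile are hypothetical, for illustration only):

```python
def ctdi100(z_mm, dose, beam_width_mm):
    """CTDI100: trapezoid-rule integral of the dose profile D(z) over
    z in [-50, +50] mm, divided by the total nominal beam width N*T."""
    total = 0.0
    for k in range(len(z_mm) - 1):
        z0, z1 = z_mm[k], z_mm[k + 1]
        if z1 <= -50.0 or z0 >= 50.0:
            continue  # interval lies entirely outside the 100-mm window
        total += 0.5 * (dose[k] + dose[k + 1]) * (z1 - z0)
    return total / beam_width_mm

# Idealized rectangular 40-mm-wide profile of 10 mGy (illustration only):
z = [-60.0 + 0.5 * k for k in range(241)]
d = [10.0 if abs(zz) <= 20.0 else 0.0 for zz in z]
value = ctdi100(z, d, 40.0)  # ~10 mGy, as expected for this profile
```

A measured or simulated profile has scatter tails extending beyond the beam edges, which is exactly what the OSL profile comparison in the abstract probes.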
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
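The fission-matrix idea behind these acceleration methods can be illustrated with a toy matrix: its dominant eigenpair yields k_eff and the converged fission source directly, bypassing the slow power iteration whose convergence rate is governed by the dominance ratio. This is an illustrative sketch, not the thesis' MCNP implementation:

```python
import numpy as np

# F[i, j]: expected next-generation fission neutrons born in region i per
# fission neutron born in region j.  The dominant eigenpair of F gives k_eff
# and the fundamental-mode source; the second-to-first eigenvalue ratio is the
# dominance ratio that makes plain power iteration slow when it is near 1.
rng = np.random.default_rng(0)
n = 20
F = rng.random((n, n)) + np.diag(np.full(n, 5.0))   # toy nonnegative matrix

eigvals, eigvecs = np.linalg.eig(F)
order = np.argsort(np.abs(eigvals))[::-1]
k_eff = np.abs(eigvals[order[0]])
dominance_ratio = np.abs(eigvals[order[1]]) / k_eff

# "Accelerated" source: take the fundamental eigenvector directly instead of
# filtering higher modes out over many power iterations.
source = np.abs(np.real(eigvecs[:, order[0]]))
source /= source.sum()
print(f"k_eff (toy) = {k_eff:.3f}, dominance ratio = {dominance_ratio:.3f}")
```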
Studying the response of a plastic scintillator to gamma rays using the Geant4 Monte Carlo code.
Ghadiri, Rasoul; Khorsandi, Jamshid
2015-05-01
To determine the gamma-ray response function of an NE-102 scintillator and to investigate the gamma spectra resulting from the transport of optical photons, we simulated an NE-102 scintillator using the Geant4 code. The results of the simulation were compared with experimental data, and good consistency between simulation and data was observed. In addition, the time and spatial distributions, along with the energy distribution and surface treatments of the scintillation detectors, were calculated. This simulation makes it possible to optimize the position of the photomultiplier tube (or photodiodes) to yield the best coupling to the detector.
Santoro, R.T.; Barnes, J.M.; Soran, P.D.; Alsmiller, R.G. Jr.
1982-11-01
Neutron and gamma-ray energy spectra resulting from the streaming of 14 MeV neutrons through a 0.30-m-diameter duct (length-to-diameter ratio = 2.83) have been calculated using the Monte Carlo code MCNP. The calculated spectra are compared with measured data and with data calculated previously using a combination of discrete ordinates and Monte Carlo methods. Comparisons are made at twelve detector locations on and off the duct axis for neutrons with energies above 850 keV and for gamma rays with energies above 750 keV. The neutron spectra calculated using MCNP agree with the measured data to within approximately 5-50%, depending on detector location and neutron energy; agreement with the measured gamma-ray spectra is also within approximately 5-50%. The spectra obtained with MCNP are likewise in favorable agreement with the previously calculated data and were obtained with less calculational effort.
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique on single instruction, multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest-neighbor, next-nearest-neighbor, and long-range screened Coulomb interactions on the SIMD MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development that optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
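The geometric (lattice-partitioning) parallel approach mentioned above is commonly realized as a checkerboard update: under nearest-neighbour coupling, sites of one sublattice do not interact with each other, so an entire colour can be updated in lockstep, exactly the data-parallel pattern a SIMD array exploits. A NumPy sketch of one such Metropolis sweep, with vectorized arrays standing in for the MasPar processor grid:

```python
import numpy as np

# Checkerboard Metropolis sweep for the 2D nearest-neighbour Ising model (J = 1):
# each colour is flipped simultaneously, which is valid because same-colour
# sites share no couplings.  Plain NumPy stands in for the SIMD hardware.
def checkerboard_sweep(spins, beta, rng):
    ii, jj = np.indices(spins.shape)
    for colour in (0, 1):                          # black sites, then white sites
        mask = (ii + jj) % 2 == colour
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                     # energy change if a site flips
        accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0.0, None))
        spins[mask & accept] *= -1                 # Metropolis acceptance
    return spins

rng = np.random.default_rng(1)
spins = rng.choice(np.array([-1, 1]), size=(64, 64))
for _ in range(100):
    checkerboard_sweep(spins, beta=0.6, rng=rng)   # below T_c: ordering sets in
print("mean |magnetisation| =", abs(float(spins.mean())))
```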
Fission Matrix Capability for MCNP Monte Carlo
NASA Astrophysics Data System (ADS)
Brown, Forrest; Carney, Sean; Kiedrowski, Brian; Martin, William
2014-06-01
We describe recent experience and results from implementing a fission matrix capability into the MCNP Monte Carlo code. The fission matrix can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission neutron source distribution. It can also be used to accelerate the convergence of the power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used in MCNP to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. Past difficulties and limitations of the fission matrix approach are overcome with a new sparse representation of the matrix, permitting much larger and more accurate fission matrix representations. The new fission matrix capabilities provide a significant advance in the state-of-the-art for Monte Carlo criticality calculations.
Monte Carlo next-event estimates from thermal collisions
Hendricks, J.S.; Prael, R.E.
1990-01-01
A new approximate method has been developed by Richard E. Prael to allow S(α,β) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP.
CosmoPMC: Cosmology sampling with Population Monte Carlo
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren
2012-12-01
CosmoPMC is a Monte Carlo sampling code for exploring the likelihood of various cosmological probes. The sampling engine is implemented with the package pmclib and uses Population Monte Carlo (PMC), a novel technique for sampling from the posterior. PMC is an adaptive importance sampling method that iteratively improves the proposal distribution to approximate the posterior. The code has been introduced, tested and applied to various cosmology data sets.
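The adaptive importance sampling loop at the core of PMC can be sketched as follows; a 1D Gaussian toy posterior replaces the cosmological likelihoods, and the moment-matching proposal update is a simplified stand-in for pmclib's mixture-based update:

```python
import numpy as np

# Population Monte Carlo in miniature: sample the current proposal, weight by
# posterior/proposal, then adapt the proposal to the weighted moments.
rng = np.random.default_rng(0)

def log_posterior(x):
    return -0.5 * (x - 3.0) ** 2          # unnormalised log N(3, 1) toy target

mu, sigma = 0.0, 5.0                      # deliberately poor initial proposal
for _ in range(10):
    x = rng.normal(mu, sigma, size=5000)  # sample the current proposal q
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    w = np.exp(log_posterior(x) - log_q)  # importance weights p/q (unnormalised)
    w /= w.sum()
    mu = float(np.sum(w * x))             # adapt proposal to weighted moments
    sigma = float(np.sqrt(np.sum(w * (x - mu) ** 2)))

print(f"adapted proposal: mu = {mu:.2f}, sigma = {sigma:.2f}")  # approaches N(3, 1)
```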
Ferretti, A; Martignano, A; Simonato, F; Paiusco, M
2014-02-01
The aim of the present work was the validation of the VMC++ Monte Carlo (MC) engine implemented in Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energies 5-12 MeV) generated by the Primus (Siemens) linear accelerator (linac), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of a 1D gamma analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). Optimization of the BEAMnrc model required enlarging the beam exit window to match the calculated and measured profiles (final average γ > 98%). OMTPS dose distribution maps were then compared with DOSXYZnrc using a 2D gamma analysis (3%, 3 mm) in three virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions for the air-water step phantom were in very high agreement (γ ∼ 99%), while for the heterogeneous phantoms there were differences of about 9% in the air insert and of about 10-15% in the bone region. This is due to the Masterplan implementation of VMC++, which reports the dose as "dose to water" instead of "dose to medium".
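The 1D gamma analysis used for the commissioning combines a dose-difference criterion with a distance-to-agreement criterion; a reference point passes if the combined metric is ≤ 1 for some point of the compared profile. A minimal global-normalisation sketch (2%, 2 mm), with toy profiles in place of the measured and calculated ones:

```python
import numpy as np

# Global 1D gamma analysis: reference point i passes if
# min_j sqrt(((x_j - x_i)/dta)^2 + ((D_eval_j - D_ref_i)/(dd * D_max))^2) <= 1.
def gamma_pass_rate(x_mm, ref_dose, eval_dose, dd=0.02, dta_mm=2.0):
    norm = ref_dose.max()                      # global normalisation
    gammas = []
    for xi, di in zip(x_mm, ref_dose):
        g2 = ((x_mm - xi) / dta_mm) ** 2 + ((eval_dose - di) / (dd * norm)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50.0, 50.0, 201)              # toy profile positions [mm]
measured = np.exp(-x ** 2 / 800.0)
calculated = measured * 1.01                   # uniform 1% offset: within 2% criterion
print(f"gamma pass rate = {gamma_pass_rate(x, measured, calculated):.1f}%")
```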
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.
Verhaegen, Frank
2002-05-21
High atomic number (Z) heterogeneities in tissue exposed to photons with energies of up to about 1 MeV can cause significant dose perturbations in their immediate vicinity. The recently released Monte Carlo (MC) code EGSnrc (Kawrakow 2000a Med. Phys. 27 485-98) was used to investigate the dose perturbation of high-Z heterogeneities in tissue in kilovolt (kV) and 60Co photon beams. Simulations were performed of measurements with a dedicated thin-window parallel-plate ion chamber near a high-Z interface in a 60Co photon beam (Nilsson et al 1992 Med. Phys. 19 1413-21). Good agreement was obtained between simulations and measurements for a detailed set of experiments in which the thickness of the ion chamber window, the thickness of the air gap between ion chamber and heterogeneity, the depth of the ion chamber in polystyrene and the material of the interface were varied. The EGSnrc code offers several improvements in the electron and photon production and transport algorithms over the older EGS4/PRESTA code (Nelson et al 1985 Stanford Linear Accelerator Center Report SLAC-265, Bielajew and Rogers 1987 Nucl. Instrum. Methods Phys. Res. B 18 165-81). The influence of the new EGSnrc features was investigated for simulations of a planar slab of a high-Z medium embedded in water and exposed to kV or 60Co photons. It was found that using the new electron transport algorithm in EGSnrc, including relativistic spin effects in elastic scattering, significantly affects the calculation of the dose distribution near high-Z interfaces. The simulations were found to be independent of the maximum fractional electron energy loss per step (ESTEPE), which was often a cause for concern in older EGS4 simulations. Concerning the new features of the photon transport algorithm, sampling of the photoelectron angular distribution was found to have a significant effect, whereas the effect of binding energies in Compton scattering was found to be negligible. A slight dose artefact very close to high
Hajizadeh-Safar, M; Ghorbani, M; Khoshkharam, S; Ashrafi, Z
2014-07-01
The gamma camera is an important apparatus in nuclear medicine imaging. Its detection part consists of a scintillation detector with a heavy collimator. The substitution of semiconductor detectors for scintillators in these cameras has been studied extensively. In this study, we aim to introduce a new design of P-N semiconductor detector array for nuclear medicine imaging. A P-N semiconductor detector composed of N-SnO2:F and P-NiO:Li was introduced through simulation with the MCNPX Monte Carlo code. Its sensitivity to different factors such as thickness, dimension, and direction of the emitted photons was investigated. It was then used to configure a new design of a one-dimensional array, whose spatial resolution for nuclear medicine imaging was studied. A one-dimensional array with 39 detectors was simulated to measure a predefined linear distribution of Tc-99m activity and its spatial resolution. The activity distribution was calculated from the detector responses through mathematical linear optimization using the LINPROG code in MATLAB. Three different configurations of the one-dimensional detector array were simulated: horizontal, vertical single-sided, and vertical double-sided. In all of these configurations, the energy window around the photopeak was ±1%. The results show that the detector response increases with the dimension and thickness of the detector, with the highest sensitivity for photons emitted 15-30° above the surface. The horizontal detector-array configuration is not suitable for imaging line activity sources. The activity distribution measured with the vertical double-sided array configuration bears no similarity to the emission sources and hence is not suitable for imaging purposes. The activity distribution measured with the vertical single-sided array configuration shows good similarity to the sources, and it could therefore be introduced as a suitable configuration for nuclear medicine imaging. It has been shown that using
NASA Astrophysics Data System (ADS)
Ficaro, Edward Patrick
The ^{252}Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G_{23}(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G_{12}(ω) and G_{13}(ω) between the neutron detectors and an ionization chamber 1 containing ^{252}Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities G_{12}^{*}(ω)G_{13}(ω)/[G_{11}(ω)G_{23}(ω)], using a point kinetic model formulation that is independent of the detectors' properties and of a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated, and KENO-NR can then be rerun to provide a distributed-source k_{eff} calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite ^6Li-glass-plastic scintillators at large subcriticalities for the tank of
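The spectral-density ratio central to the CSDNA method can be illustrated with synthetic detector signals: a common source term plus independent channel noise. The segment-averaged FFT estimate below is a toy stand-in for the measured CPSDs, not a neutron-physics simulation:

```python
import numpy as np

# Toy CSDNA spectral ratio: channels 2 and 3 are neutron detectors, channel 1
# the 252Cf ionisation chamber; all three see a common source term plus noise.
rng = np.random.default_rng(0)
n_seg, seg_len = 200, 1024
source = rng.normal(size=(n_seg, seg_len))
ch1 = source + 0.5 * rng.normal(size=(n_seg, seg_len))
ch2 = source + 0.5 * rng.normal(size=(n_seg, seg_len))
ch3 = source + 0.5 * rng.normal(size=(n_seg, seg_len))

def cpsd(a, b):
    # Segment-averaged cross power spectral density estimate G_ab(omega)
    A, B = np.fft.rfft(a, axis=1), np.fft.rfft(b, axis=1)
    return (np.conj(A) * B).mean(axis=0)

# Ratio G12*(w) G13(w) / [G11(w) G23(w)] from the abstract
ratio = (np.conj(cpsd(ch1, ch2)) * cpsd(ch1, ch3)
         / (cpsd(ch1, ch1) * cpsd(ch2, ch3)))
print("mean spectral ratio =", float(np.real(ratio).mean()))
```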
Caruso, A.; Cherubini, S.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Crucillà, V.; Gulino, M.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; and others
2015-02-24
Novae are violent explosive astrophysical events occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called 'narrow systems' because the two components interact with each other: a process of mass exchange results in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, composed mainly of hydrogen. Over time, more and more material accumulates until the pressure and the temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of this 'hot hydrogen burning' are then injected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as {sup 13}N and {sup 18}F decay with subsequent emission of gamma radiation at energies below 511 keV. The gamma rays produced by electron-positron annihilation of the positrons emitted in the decay of {sup 18}F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of the study of the nuclear reactions that lead both to the formation and to the destruction of {sup 18}F. Among these, the {sup 18}F(p,α){sup 15}O reaction is one of the main destruction channels, and it was therefore studied at energies of astrophysical interest. The experiment performed at RIKEN, Japan, has as its objective the study of the {sup 18}F(p,α){sup 15}O reaction, using a beam of {sup 18}F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.
Mirzakhanian, L; Enger, S; Giusti, V
2015-06-15
Purpose: A major concern in proton therapy is the production of secondary neutrons causing secondary cancers, especially in young adults and children. The most-utilized Monte Carlo codes in proton therapy are Geant4 and MCNP. However, the default versions of Geant4 and MCNP6 do not have suitable cross sections or physical models to properly handle secondary particle production in the proton energy ranges used for therapy. In this study, the default versions of Geant4 and MCNP6 were modified to better handle the production of secondaries by adding the TENDL-2012 cross-section library. Methods: In-water proton depth-dose was measured at The Svedberg Laboratory in Uppsala (Sweden). The proton beam was mono-energetic with a mean energy of 178.25±0.2 MeV. The measurement set-up was simulated with Geant4 version 10.00 (default and modified versions) and MCNP6. Proton depth-dose, primary and secondary particle fluence, and neutron equivalent dose were calculated. In the case of Geant4, the secondary particle fluence was filtered by physics process to identify the main process responsible for the difference between the default and modified versions. Results: The proton depth-dose curves and primary proton fluence show good agreement between both Geant4 versions and MCNP6. With respect to the modified version, default Geant4 underestimates the production of secondary neutrons while overestimating that of gammas. The "ProtonInElastic" process was identified as the main process responsible for the difference between the two versions. MCNP6 shows higher neutron production and lower gamma production than both Geant4 versions. Conclusion: Despite the good agreement on the proton depth-dose curve and primary proton fluence, there is a significant discrepancy in secondary neutron production between MCNP6 and both versions of Geant4. Further studies are thus in order to find the possible cause of this discrepancy or more accurate cross-sections/models to handle the nuclear
NASA Astrophysics Data System (ADS)
Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.
2016-07-01
This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing the E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied for constructing the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane by comparing the E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water using the VSMs and the reference PS file located below the FF, and also after collimation in both water and a heterogeneous phantom, were compared using 1.5%-0 mm and 2%-0 mm global gamma indices, respectively. Finally, portal images were calculated without and with phantoms in the beam, and the model was evaluated using a 1%-0 mm global gamma index. The performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
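The core of the VSM, storing variables in joint (correlated) histograms and sampling particles from them, can be illustrated in 2D; two toy variables stand in for the 4D (E, r_s, φ_d, θ_d) case, and uniform in-bin sampling replaces the adaptive binning schemes:

```python
import numpy as np

# Joint-histogram source model in miniature: sampling from a 2D histogram
# preserves the correlation that independent 1D histograms would lose.
rng = np.random.default_rng(0)
E = rng.gamma(shape=3.0, scale=1.0, size=100000)      # toy energy spectrum
r = 10.0 / (1.0 + E) * rng.random(100000)             # radius anti-correlated with E

hist, e_edges, r_edges = np.histogram2d(E, r, bins=(40, 40))
p = (hist / hist.sum()).ravel()

def sample_vsm(n):
    idx = rng.choice(p.size, size=n, p=p)             # pick a joint (E, r) bin
    ei, ri = np.unravel_index(idx, hist.shape)
    e = rng.uniform(e_edges[ei], e_edges[ei + 1])     # uniform within the bin
    rr = rng.uniform(r_edges[ri], r_edges[ri + 1])
    return e, rr

e, rr = sample_vsm(100000)
print("correlation, reference PS:", np.corrcoef(E, r)[0, 1])
print("correlation, VSM sample  :", np.corrcoef(e, rr)[0, 1])
```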
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
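The "random numbers and sampling" step at the heart of such transport codes reduces, in its simplest form, to inverting the CDF of the free-flight distribution; a minimal sketch:

```python
import math
import random

# Inverse-CDF sampling of the distance to next collision: with total macroscopic
# cross-section Sigma_t, the path length follows p(s) = Sigma_t * exp(-Sigma_t * s),
# sampled as s = -ln(xi) / Sigma_t with xi uniform on (0, 1].
def sample_path_length(sigma_t, rng):
    return -math.log(1.0 - rng.random()) / sigma_t   # 1 - U avoids log(0)

rng = random.Random(42)
sigma_t = 0.5                                        # [1/cm]
samples = [sample_path_length(sigma_t, rng) for _ in range(200000)]
mean_free_path = sum(samples) / len(samples)
print(f"sampled mean free path ~ {mean_free_path:.2f} cm (expected 1/Sigma_t = 2.00 cm)")
```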
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations with respect to probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In this context, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
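The telescoping identity can be sketched with a toy problem; an Euler-discretised Ornstein-Uhlenbeck SDE stands in for the PDE, and the coupled fine/coarse pairs share Brownian increments so the correction terms have small variance (illustrative only, not the paper's SMC version):

```python
import numpy as np

# Multilevel Monte Carlo in miniature: estimate E[X_1^2] for dX = -X dt + dW on
# [0, 1], X_0 = 0, Euler step h_l = 2^-l.  The telescoping sum
# E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] lets coarse levels carry most samples.
rng = np.random.default_rng(0)

def euler_pair(level, n_paths):
    n_fine = 2 ** level
    dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=(n_paths, n_fine))
    xf = np.zeros(n_paths)
    for k in range(n_fine):                    # fine path, step 1/n_fine
        xf = xf - xf / n_fine + dW[:, k]
    if level == 0:
        return xf ** 2, None
    xc = np.zeros(n_paths)
    for k in range(n_fine // 2):               # coarse path, same increments summed
        xc = xc - xc * (2.0 / n_fine) + dW[:, 2 * k] + dW[:, 2 * k + 1]
    return xf ** 2, xc ** 2

L = 6
estimate = euler_pair(0, 200000)[0].mean()
for level in range(1, L + 1):
    fine, coarse = euler_pair(level, 200000 >> level)   # fewer paths per level
    estimate += (fine - coarse).mean()
# Continuum reference: E[X_1^2] = (1 - exp(-2)) / 2 ~ 0.432 (level-L bias remains)
print("MLMC estimate of E[X_1^2]:", float(estimate))
```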
Monte Carlo tests of the ELIPGRID-PC algorithm
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
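The quantity being validated, the probability that a regular sampling grid hits a hot spot, is itself straightforward to estimate by Monte Carlo. A minimal sketch (Python for illustration; a circular rather than elliptical hot spot, and all names hypothetical, not the ELIPGRID algorithm itself):

```python
import math
import random

def hotspot_detection_prob(radius, spacing=1.0, trials=200_000, seed=7):
    # Monte Carlo probability that a square sampling grid detects a circular
    # hot spot whose centre falls uniformly at random within one grid cell.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cx, cy = rng.random() * spacing, rng.random() * spacing
        # The nearest grid node is one of the four corners of the cell.
        dx = min(cx, spacing - cx)
        dy = min(cy, spacing - cy)
        if math.hypot(dx, dy) <= radius:
            hits += 1
    return hits / trials
```

For a circular hot spot with radius no larger than half the grid spacing, the exact answer is pi*r^2 / G^2, which gives a direct check on the simulation.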
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
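The article's program is written in Quick BASIC; the same steps can be sketched in a few lines of Python (function name hypothetical):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=42):
    # Sample-mean Monte Carlo integration: (b - a) times the average of f
    # evaluated at n uniformly random points in [a, b].
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n
```

For example, mc_integrate(lambda x: x*x, 0, 1) approximates the exact value 1/3, with a statistical error shrinking like 1/sqrt(n).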
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
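The maximization application mentioned above amounts to random search: evaluate the function at many random points and keep the best. A hedged sketch (Python for illustration; names hypothetical):

```python
import random

def mc_maximize(f, a, b, n=50_000, seed=0):
    # Random search: sample n uniform points in [a, b] and keep the best value.
    rng = random.Random(seed)
    best_x, best_f = a, f(a)
    for _ in range(n):
        x = a + (b - a) * rng.random()
        fx = f(x)
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

With f(x) = x(1 - x) on [0, 1], the sampled maximum converges on the true maximum 0.25 at x = 0.5.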
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-06-15
Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u 12C ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence pattern along the beam-axis, the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
Marcus, Ryan C.
2012-07-24
Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
A Monte Carlo Method for Multi-Objective Correlated Geometric Optimization
2014-05-01
...requiring computationally intensive algorithms for optimization. This report presents a method developed for solving such systems using a Monte Carlo... performs a Monte Carlo optimization to provide geospatial intelligence on entity placement using the OpenCL framework. The solutions for optimal...
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be carried out through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
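The core loop of such a Monte Carlo study, drawing repeated samples from a known population and recording the statistic of interest, can be sketched as (Python for illustration; names hypothetical):

```python
import random
import statistics

def sampling_distribution(statistic, population_sampler, n, reps=5000, seed=3):
    # Draw `reps` samples of size n from a known population and collect the
    # statistic's values -- an empirical sampling distribution.
    rng = random.Random(seed)
    return [statistic([population_sampler(rng) for _ in range(n)])
            for _ in range(reps)]
```

With an exponential(1) population and the sample mean as the statistic, the simulated sampling distribution centers on 1.0 with standard error 1/sqrt(n), matching theory.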
Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations
Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C
2011-01-01
It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
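The under-prediction mechanism is easy to reproduce outside any transport code: a naive standard error assumes independent samples, while a batch-means estimate accounts for correlation. A self-contained sketch using an AR(1) chain as a surrogate for correlated cycle tallies (Python; hypothetical names, not the SCALE/KENO or MCNP machinery):

```python
import random
import statistics

def naive_and_batch_se(x, nbatch=50):
    # Naive standard error (pretends samples are independent) vs. batch-means
    # standard error, which recovers the inflation from serial correlation.
    n = len(x)
    naive = statistics.stdev(x) / n**0.5
    m = n // nbatch
    means = [statistics.mean(x[i * m:(i + 1) * m]) for i in range(nbatch)]
    batch = statistics.stdev(means) / nbatch**0.5
    return naive, batch

rng = random.Random(11)
chain, v = [], 0.0
for _ in range(100_000):
    v = 0.9 * v + rng.gauss(0.0, 1.0)   # AR(1): strongly correlated "cycles"
    chain.append(v)
```

For this chain the true standard error of the mean exceeds the naive one by a factor of sqrt((1+0.9)/(1-0.9)) ≈ 4.4, an under-prediction of the same character as the factor-of-five effect described in the abstract.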
Chemical application of diffusion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Reynolds, P. J.; Lester, W. A., Jr.
1983-10-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
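A bare-bones diffusion Monte Carlo loop, walkers diffusing and branching against a trial energy, can be sketched for a 1D harmonic oscillator (Python for illustration; no importance sampling, all names and parameters hypothetical, not the authors' code):

```python
import math
import random

def dmc_harmonic(nwalk=400, dt=0.05, steps=2000, seed=5):
    # Minimal diffusion Monte Carlo for V(x) = x^2/2; the exact ground-state
    # energy of this oscillator is 0.5 in these units.
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(nwalk)]
    e_t, e_trace = 0.5, []
    for step in range(steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))         # diffusion move
            w = math.exp(-dt * (0.5 * x * x - e_t))     # branching weight
            for _ in range(int(w + rng.random())):      # stochastic split/kill
                new.append(x)
        walkers = new or [0.0]
        # Population control: nudge the trial energy toward the growth rate.
        e_t += 0.02 * math.log(nwalk / max(len(walkers), 1))
        if step > steps // 2:
            e_trace.append(e_t)
    return sum(e_trace) / len(e_trace)
```

After equilibration, the trial energy fluctuates around the ground-state energy (up to time-step and population-control bias), which is the stochastic solution of the Schroedinger equation the abstract refers to.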
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ion and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burns nd burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Womersley, J. (Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra
Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor
2009-09-15
The energy-resolved emission-angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which the maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALC v1.0 code.
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and for use by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates, together with the overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration; it has resulted in 13
An unbiased Hessian representation for Monte Carlo PDFs.
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representations of the NNPDF3.0 set, and the MC-H PDF set.
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH sub 2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and this is compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures.
Parallel tempering Monte Carlo in LAMMPS.
Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.
2003-11-01
We present here the details of the implementation of the parallel tempering Monte Carlo technique into LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows for many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows for large regions of energy space to be sampled very quickly, and allows for minimum energy configurations to emerge in very complex systems, such as large biomolecular systems. By including this algorithm into an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and allow this technique to quickly be available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
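The swap move described above can be sketched on a toy double-well energy surface (Python for illustration; a scalar coordinate rather than a biomolecule, and all names and parameters hypothetical, not the LAMMPS implementation):

```python
import math
import random

def energy(x):
    return (x * x - 1.0) ** 2               # double-well potential

def parallel_tempering(temps=(0.05, 0.2, 0.8, 2.0), sweeps=20_000, seed=9):
    # Metropolis chains at several temperatures, with replica-swap moves
    # between adjacent temperatures; returns the mean energy per replica.
    rng = random.Random(seed)
    xs = [rng.uniform(-2, 2) for _ in temps]
    mean_e = [0.0] * len(temps)
    for _ in range(sweeps):
        for i, t in enumerate(temps):       # local Metropolis updates
            prop = xs[i] + rng.gauss(0.0, 0.5)
            d_e = energy(prop) - energy(xs[i])
            if rng.random() < math.exp(min(0.0, -d_e / t)):
                xs[i] = prop
            mean_e[i] += energy(xs[i]) / sweeps
        i = rng.randrange(len(temps) - 1)   # attempt one swap per sweep
        d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return mean_e
```

The swap acceptance min(1, exp[(β_i − β_j)(E_i − E_j)]) preserves each replica's Boltzmann distribution, so the cold replica inherits the broad exploration done at high temperature.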
A review of best practices for Monte Carlo criticality calculations
Brown, Forrest B
2009-01-01
Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain three concerns that must be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
NASA Astrophysics Data System (ADS)
Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.
2014-06-01
The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced concepts of reactor involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.
1989-08-01
... and L.M. Petrie, Vehicle Code System (VCS) User’s Manual, Oak Ridge National Laboratory, ORNL-TM-4648 (1974). (UNCLASSIFIED) 3. F.R. Mynatt, F.J. Muckenthaler and P.N. ...
NASA Astrophysics Data System (ADS)
Cochran, Thomas
2007-04-01
In 2002 and again in 2003, an investigative journalist unit at ABC News transported a 6.8 kilogram metallic slug of depleted uranium (DU) via shipping container from Istanbul, Turkey to Brooklyn, NY and from Jakarta, Indonesia to Long Beach, CA. Targeted inspection of these shipping containers by Department of Homeland Security (DHS) personnel, which included the use of gamma-ray imaging, portal monitors and hand-held radiation detectors, did not uncover the hidden DU. Monte Carlo analysis of the gamma-ray intensity and spectrum of a DU slug and one consisting of highly enriched uranium (HEU) showed that DU was a proper surrogate for testing the ability of DHS to detect the illicit transport of HEU. Our analysis using MCNP-5 illustrated the ease of fully shielding an HEU sample to avoid detection. The assembly of an Improvised Nuclear Device (IND) -- a crude atomic bomb -- from sub-critical pieces of HEU metal was then examined via Monte Carlo criticality calculations. Nuclear explosive yields of such an IND as a function of the speed of assembly of the sub-critical HEU components were derived. A comparison was made between the more rapid assembly of sub-critical pieces of HEU in the ``Little Boy'' (Hiroshima) weapon's gun barrel and gravity assembly (i.e., dropping one sub-critical piece of HEU on another from a specified height). Based on the difficulty of detecting HEU and the straightforward construction of an IND utilizing HEU, current U.S. government policy must be modified to more urgently prioritize elimination of and securing the global inventories of HEU.
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
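The deterministic skeleton of the method is the classical power iteration; the paper's contribution is to carry out part of the matrix-vector multiply exactly and the rest stochastically. A sketch of that skeleton (Python for illustration; names hypothetical):

```python
def power_method(matvec, v, iters=200):
    # Power iteration: repeatedly apply the matrix and renormalize; in the
    # semistochastic variant, part of `matvec` would be a sampled estimate.
    for _ in range(iters):
        w = matvec(v)
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    w = matvec(v)
    # Rayleigh-quotient estimate of the dominant eigenvalue.
    num = sum(a * b for a, b in zip(v, w))
    den = sum(a * a for a in v)
    return num / den, v
```

Replacing part of the multiply with a statistical sample introduces noise but makes very large matrices tractable; the semistochastic split reduces that noise relative to a fully stochastic multiply.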
Monte Carlo Estimation of the Electric Field in Stellarators
NASA Astrophysics Data System (ADS)
Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.
1986-10-01
The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable.
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
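The flat-histogram idea can be illustrated with a Wang-Landau-style update, of which SAMC is a refinement with a deterministic gain schedule. A toy sketch for a system whose exact density of states is binomial (Python for illustration; names, parameters and the simple halving schedule are hypothetical):

```python
import math
import random

def flat_histogram_lng(n=10, lnf_final=1e-5, flatness=0.8, seed=13):
    # Estimate ln g(E) for E = number of up spins among n independent spins;
    # the exact answer is ln C(n, E), so the result is easy to check.
    rng = random.Random(seed)
    spins = [rng.randrange(2) for _ in range(n)]
    e = sum(spins)
    lng = [0.0] * (n + 1)       # running estimate of ln g(E)
    hist = [0] * (n + 1)
    lnf = 1.0                   # modification factor (the "gain")
    while lnf > lnf_final:
        for _ in range(2000):
            i = rng.randrange(n)
            e_new = e + (1 - 2 * spins[i])
            # Accept with probability g(E)/g(E'): pushes the walk to rare E.
            if rng.random() < math.exp(lng[e] - lng[e_new]):
                spins[i] = 1 - spins[i]
                e = e_new
            lng[e] += lnf
            hist[e] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n + 1)
            lnf /= 2.0          # reduce the gain once the histogram is flat
    return [v - lng[0] for v in lng]   # normalize so that ln g(0) = 0
```

The same loop generalizes to a two-dimensional density of states g(E1,E2) by indexing lng and hist with a pair of macroscopic variables, which is the generalization the abstract describes.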
Hybrid Monte Carlo/deterministic methods for radiation shielding problems
NASA Astrophysics Data System (ADS)
Becker, Troy L.
For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods
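The split/roulette step a weight window imposes can be sketched in isolation (Python for illustration; function and parameter names hypothetical, not the thesis code):

```python
import random

def apply_weight_window(weight, w_low=0.5, w_high=2.0, w_survival=1.0, rng=random):
    # Split or roulette a particle so its statistical weight lands inside the
    # window [w_low, w_high], preserving the expected total weight.
    if weight > w_high:
        # Split with stochastic rounding: E[n] * w_survival == weight.
        n = int(weight / w_survival + rng.random())
        return [w_survival] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survival.
        if rng.random() < weight / w_survival:
            return [w_survival]
        return []
    return [weight]
```

Because the expected total weight is unchanged in both branches, tallies remain unbiased while the particle population is concentrated where the user (or the adjoint solution) wants it.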
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-03-01
Conversion coefficients have been calculated for fluence to absorbed dose, fluence to effective dose and fluence to gray equivalent, for isotropic exposure to alpha particles in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). The coefficients were calculated using Monte Carlo transport code MCNPX 2.7.A and BodyBuilder 1.3 anthropomorphic phantoms modified to allow calculation of effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Coefficients for effective dose are within 30 % of those calculated using ICRP 1990 recommendations.
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-03-01
Conversion coefficients have been calculated for fluence-to-absorbed dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult male and an adult female to (56)Fe(26+) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). The coefficients were calculated using Monte Carlo transport code MCNPX 2.7.A and BodyBuilder 1.3 anthropomorphic phantoms modified to allow calculation of effective dose using tissues and tissue weighting factors from either the 1990 or 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Calculations using ICRP 2007 recommendations result in fluence-to-effective dose conversion coefficients that are almost identical at most energies to those calculated using ICRP 1990 recommendations.
NASA Technical Reports Server (NTRS)
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
Monte Carlo calculation for microplanar beam radiography.
Company, F Z; Allen, B J; Mino, C
2000-09-01
In radiography the scattered radiation from the off-target region decreases the contrast of the target image. We propose that a bundle of collimated, closely spaced, microplanar beams can reduce the scattered radiation and eliminate the effect of secondary electron dose, thus increasing the image dose contrast in the detector. The lateral and depth dose distributions of 20-200 keV microplanar beams are investigated using the EGS4 Monte Carlo code to calculate the depth doses and dose profiles in a 6 cm x 6 cm x 6 cm tissue phantom. The maximum dose on the primary beam axis (peak) and the minimum inter-beam scattered dose (valley) are compared at different photon energies and the optimum energy range for microbeam radiography is found. Results show that a bundle of closely spaced microplanar beams can give superior image contrast compared with a single macrobeam of the same overall area.
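The peak-to-valley comparison described above can be illustrated with a toy lateral profile: narrow Gaussian primary peaks on the beam axes sitting on a flat scattered-dose floor. The geometry, beam width, and scatter level below are hypothetical, chosen only to show how the ratio is formed.

```python
import math

def lateral_profile(x, centers, width, scatter):
    """Toy lateral dose profile for a bundle of microplanar beams: Gaussian
    primary peaks at the given centers plus a flat scattered-dose floor
    (positions/widths in arbitrary units; all parameters hypothetical)."""
    primary = sum(math.exp(-0.5 * ((x - c) / width) ** 2) for c in centers)
    return primary + scatter

def peak_to_valley(centers, width, scatter):
    """Peak dose on the first beam axis over the valley dose midway
    between the first two beams."""
    peak = lateral_profile(centers[0], centers, width, scatter)
    valley = lateral_profile(0.5 * (centers[0] + centers[1]), centers, width, scatter)
    return peak / valley
```

With well-separated beams the valley dose is dominated by the scatter floor, so reducing scatter directly raises the peak-to-valley ratio, which is the contrast argument made in the abstract.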
Markov Chain Monte Carlo from Lagrangian Dynamics
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2014-01-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515
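For orientation, here is a minimal 1-D HMC sampler with a flat (Euclidean) metric, where the leapfrog integrator is already explicit. RHMC and the Lagrangian variant above generalize exactly this loop with a position-dependent metric; this sketch shows only the special case, with step size and trajectory length chosen arbitrarily.

```python
import math
import random

def hmc_sample(logp, grad, x0, step=0.2, n_leap=20, n_samples=2000):
    """Minimal 1-D Hamiltonian Monte Carlo with an identity mass matrix
    and explicit leapfrog integration."""
    x, samples = x0, []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)            # resample momentum
        x_new = x
        p_new = p + 0.5 * step * grad(x)      # initial half momentum step
        for i in range(n_leap):
            x_new = x_new + step * p_new      # full position step
            factor = step if i < n_leap - 1 else 0.5 * step
            p_new = p_new + factor * grad(x_new)
        # Metropolis accept/reject on the change in the Hamiltonian
        h_old = -logp(x) + 0.5 * p * p
        h_new = -logp(x_new) + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x)
    return samples
```

The implicit equations mentioned in the abstract appear precisely when the momentum update depends on a position-dependent metric; in the flat case above every update is explicit.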
NASA Astrophysics Data System (ADS)
Iwamoto, Yosuke; Ogawa, Tatsuhiko
2017-04-01
Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL-2015, ENDF/B-VII.1, and JEFF-3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL-2015, ENDF/B-VII.1, and JENDL-4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL-2015 and NJOY + TENDL-2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for 72Ge, 75As, 89Y, and 109Ag in the ENDF/B-VII.1 library, and for 90Zr and 55Mn in the JEFF-3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL-2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL-2015 can calculate PKA spectra and heating numbers correctly.
Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.
Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald
2016-12-01
Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-06-15
Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries like multi-wire chambers in the beam path of a particle therapy beam line. The effect was described by a mathematical model which was implemented in the Monte Carlo code FLUKA via user routines, in order to reduce the computation time for the simulations. Methods: The depth dose curve of 80 MeV/u 12C ions in a water phantom was calculated using the Monte Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ∼80 microns) occurring in a typical set of multi-wire and dose chambers was mathematically described by optimizing a normal distribution so that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was converted into a probability distribution of the thickness of the eleven foils using the water equivalent thickness of the foil’s material. From this distribution, the thickness distribution of a single foil was determined inversely. In FLUKA the heterogeneous foils were replaced by homogeneous foils and a user routine was programmed that varies the thickness of the homogeneous foils for each simulated particle using this distribution. Results: Using the mathematical model and user routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones. The computation time was reduced by 90 percent. Conclusion: In this study the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model and implemented in FLUKA via user routines. Applying these routines, the computing time was reduced by 90 percent. The developed tool can be used for any heterogeneous structure with dimensions of microns to millimeters, in principle even for organic materials like lung tissue.
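The core of the model above is a convolution of the reference depth-dose curve with a normal distribution of displacement in water. A minimal sketch of that step, with my own kernel discretization and edge padding (the sigma, step, and truncation width are illustrative, not fitted values from the study):

```python
import math

def gaussian_kernel(sigma, step, half_width=4.0):
    """Discretized, normalized normal distribution (displacement in water);
    sigma and step share the same length unit, truncated at half_width sigmas."""
    n = int(half_width * sigma / step)
    w = [math.exp(-0.5 * ((i * step) / sigma) ** 2) for i in range(-n, n + 1)]
    s = sum(w)
    return [x / s for x in w]

def broaden(depth_dose, sigma, step):
    """Convolve a sampled depth-dose curve with the normal kernel, mimicking
    the modulating effect of the foils; ends are padded with edge values."""
    k = gaussian_kernel(sigma, step)
    n = len(k) // 2
    padded = [depth_dose[0]] * n + list(depth_dose) + [depth_dose[-1]] * n
    return [sum(k[j] * padded[i + j] for j in range(len(k)))
            for i in range(len(depth_dose))]
```

Because the kernel is normalized, the integral dose away from the edges is preserved; only the shape of the peak is smeared, which is the observed broadening.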
An Overview of the Monte Carlo Application ToolKit (MCATK)
Trahan, Travis John
2016-01-07
MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies under the motivation to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, a time-dependent algorithm, and fission chain algorithms; MCATK geometry – mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy.
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn
2008-09-07
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo code reports dose-to-tissue, as compared to the dose-to-water reported by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc
Brunner, Thomas A.; Kalos, Malvin H.; Gentile, Nicholas A.
2005-03-01
Domain-decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produce subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results.
Moskvin, V; Tsiamas, P; Axente, M; Farr, J; Stewart, R
2015-06-15
Purpose: One of the more critical initiating events for reproductive cell death is the creation of a DNA double strand break (DSB). In this study, we present a computationally efficient way to determine spatial variations in the relative biological effectiveness (RBE) of proton therapy beams within the FLUKA Monte Carlo (MC) code. Methods: We used the independently tested Monte Carlo Damage Simulation (MCDS) developed by Stewart and colleagues (Radiat. Res. 176, 587-602, 2011) to estimate the RBE for DSB induction of monoenergetic protons, tritium, deuterium, helium-3 and helium-4 ions, and delta-electrons. The dose-weighted RBE coefficients were incorporated into FLUKA to determine the equivalent (60)Co γ-ray dose for representative proton beams incident on cells in an aerobic and anoxic environment. Results: We found that the proton beam RBE for DSB induction at the tip of the Bragg peak, including primary and secondary particles, is close to 1.2. Furthermore, the RBE increases laterally to the beam axis in the area of the Bragg peak. At the distal edge, the RBE is in the range 1.3-1.4 for cells irradiated under aerobic conditions and may be as large as 1.5-1.8 for cells irradiated under anoxic conditions. Across the plateau region, the recorded RBE for DSB induction is 1.02 for aerobic cells and 1.05 for cells irradiated under anoxic conditions. The contribution to total effective dose from secondary heavy ions decreases with depth and is higher at shallow depths (e.g., at the surface of the skin). Conclusion: Multiscale simulation of the RBE for DSB induction provides useful insights into spatial variations in proton RBE within pristine Bragg peaks. This methodology is potentially useful for the biological optimization of proton therapy for the treatment of cancer. The study highlights the need to incorporate spatial variations in proton RBE into proton therapy treatment plans.
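The scoring step described above amounts to folding per-species absorbed-dose contributions with RBE coefficients. A minimal sketch of that bookkeeping, with illustrative stand-in RBE values (not MCDS outputs):

```python
def equivalent_dose(contributions, rbe_table):
    """Fold per-species absorbed-dose contributions [(species, dose_Gy), ...]
    with RBE coefficients to form a cobalt-60-equivalent dose. The table
    values used in the test are illustrative stand-ins only."""
    return sum(dose * rbe_table[species] for species, dose in contributions)

def dose_averaged_rbe(contributions, rbe_table):
    """Overall dose-weighted RBE: equivalent dose over total absorbed dose."""
    total = sum(dose for _, dose in contributions)
    return equivalent_dose(contributions, rbe_table) / total
```

Scoring the fold on-line, particle by particle, is what makes the approach computationally cheap compared with a full track-structure simulation at every voxel.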
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed memory parallel computer or a shared memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The overhead of communication does not increase appreciably, and the speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source
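The FET idea in one dimension can be sketched directly: each random-walk score contributes to every expansion mode at once, and the lowest-order coefficient reduces to the ordinary mean (the flat mode). This is a minimal Legendre sketch with unit statistical weights, not the fission-source machinery of the project.

```python
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def fet_coefficients(samples, order):
    """Estimate FET coefficients a_n = (2n+1)/2 * E[P_n(x)] of a density on
    [-1, 1] from unit-weight random-walk scores; a_0 is the flat mode."""
    m = float(len(samples))
    return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / m
            for n in range(order + 1)]

def fet_reconstruct(coeffs, x):
    """Evaluate the truncated expansion sum_n a_n P_n(x)."""
    return sum(a * legendre(n, x) for n, a in enumerate(coeffs))
```

Because global polynomials are supported everywhere, a fission site anywhere in the problem updates every coefficient, which is the communication-enhancing property the project exploits for loosely coupled systems.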
ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments
NASA Astrophysics Data System (ADS)
Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob
2014-06-01
The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
Monte Carlo and detector simulation in OOP (Object-Oriented Programming)
Atwood, W.B.; Blankenbecler, R.; Kunz, P.; Burnett, T.; Storr, K.M.
1990-10-01
Object-Oriented Programming techniques are explored with an eye toward applications in High Energy Physics codes. Two prototype examples are given: McOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package).
Bakshi, A K; Chatterjee, S; Palani Selvam, T; Dhabekar, B S
2010-07-01
In the present study, the energy dependence of the response of some popular thermoluminescent dosemeters (TLDs), namely LiF:Mg,Ti, LiF:Mg,Cu,P and CaSO(4):Dy, to synchrotron radiation in the energy range 10-34 keV has been investigated. The study utilised experimental, Monte Carlo and analytical methods. The Monte Carlo calculations were based on the EGSnrc and FLUKA codes. The energy responses of all the TLDs calculated using the EGSnrc and FLUKA codes are in excellent agreement with each other. The analytically calculated response shows good agreement with the Monte Carlo-calculated response in the low-energy region. In the case of CaSO(4):Dy, the Monte Carlo-calculated energy response is smaller by a factor of 3 at all energies in comparison with the experimental response when polytetrafluoroethylene (PTFE) (75 % by wt) is included in the Monte Carlo calculations. When PTFE is ignored in the Monte Carlo calculations, the difference between the calculated and experimental response decreases (the two are comparable above 25 keV). For the LiF-based TLDs, the Monte Carlo-based response shows reasonable agreement with the experimental response.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of the following two modes of splitting: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
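The two primitives the method combines are standard and small enough to write down. These are generic splitting/roulette operations on (weight, state) pairs, not PENELOPE internals, and the selective-splitting angular criterion is not modeled here.

```python
import random

def split(particle, n):
    """Simple splitting: replace one particle by n copies carrying 1/n of
    the weight each, preserving the expected score exactly."""
    w, state = particle
    return [(w / float(n), state)] * n

def roulette(particle, survival_prob):
    """Russian roulette: kill with probability 1 - p, otherwise boost the
    weight by 1/p so that the expectation is unchanged."""
    w, state = particle
    if random.random() < survival_prob:
        return [(w / survival_prob, state)]
    return []
```

Splitting is applied where particles are headed toward the region of interest (e.g. the phase-space plane below the target) and roulette where they are not, which is how the combination buys its efficiency gain.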
Design of composite laminates by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Fang, Chin; Springer, George S.
1993-01-01
A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.
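The flavor of such a procedure can be sketched as a random search over symmetric stacking sequences that keeps the lightest stack passing a strength check. Everything below the proposal step is a made-up stand-in: the real procedure evaluates the Tsai-Wu criterion ply by ply under combined loads, whereas here a toy capacity rule plays that role purely for illustration.

```python
import random

def monte_carlo_laminate(n_trials=5000, seed=42):
    """Toy random-search version of the laminate design loop: propose stacks
    of plies at discrete angles, reject stacks failing a stand-in strength
    check, and keep the lightest (fewest-ply) survivor.

    The acceptance rule below is a hypothetical placeholder for the
    Tsai-Wu evaluation, NOT the criterion itself."""
    rng = random.Random(seed)
    angles = [0, 45, -45, 90]
    best = None
    for _ in range(n_trials):
        n_plies = rng.randint(2, 16)
        stack = [rng.choice(angles) for _ in range(n_plies)]
        # stand-in rule: require two 0-degree plies and a +/-45 pair
        ok = stack.count(0) >= 2 and 45 in stack and -45 in stack
        if ok and (best is None or n_plies < len(best)):
            best = stack
    return best
```

With ply count as the weight proxy, the search converges quickly to the smallest stack satisfying the constraint; the real procedure additionally selects ply materials and a core thickness.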
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic ray spectra derived from Monte Carlo simulations show the observed power-law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool; this has been the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in the shock acceleration game. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence on the low-energy scattering in detail. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
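The power-law behavior mentioned above emerges already from the simplest first-order Fermi toy model: a fractional energy gain per shock crossing cycle and a fixed escape probability per cycle. The sketch below is this standard textbook argument, not the trajectory-following code of the abstract; gain and escape probability are illustrative parameters.

```python
import random

def shock_spectrum(n_particles=20000, gain=0.1, p_escape=0.1, seed=7):
    """Toy test-particle model of diffusive shock acceleration: each cycle
    multiplies the energy by (1 + gain); the particle escapes downstream
    with probability p_escape per cycle. The integral spectrum N(>E) of
    escaped particles approaches a power law with index
    ln(1 - p_escape) / ln(1 + gain)."""
    rng = random.Random(seed)
    energies = []
    for _ in range(n_particles):
        e = 1.0
        while rng.random() > p_escape:   # survive another crossing cycle
            e *= 1.0 + gain
        energies.append(e)
    return energies
```

For gain = p_escape = 0.1 the predicted integral index is ln(0.9)/ln(1.1) ≈ -1.1, so roughly 7% of the particles should end up above ten times the injection energy; the test checks that expectation statistically.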
Implict Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by themselves. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S{sub N} region. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S{sub N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
Improved numerical techniques for processing Monte Carlo thermal scattering data
Schmidt, E; Rose, P
1980-01-01
As part of a Thermal Benchmark Validation Program sponsored by the Electric Power Research Institute (EPRI), the National Nuclear Data Center has been calculating thermal reactor lattices using the SAM-F Monte Carlo Computer Code. As part of this program a significant improvement has been made in the adequacy of the numerical procedures used to process the thermal differential scattering cross sections for hydrogen bound in H2O.
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
Keywords: Monte Carlo, confidence interval, central limit theorem, number of iterations, Wilson score method, Wald method, normal probability plot. A check for normality can be performed to quantify the confidence level of a normality assumption. The basic idea of a normal probability plot (NPP) is to plot the sample data in
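The Wilson score interval named in the keywords is a closed-form formula for a binomial proportion, commonly preferred over the Wald interval when deciding whether a Monte Carlo run has enough iterations. A straightforward sketch of the standard formula:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (default z = 1.96 for a 95% interval). Unlike the Wald interval, it
    behaves sensibly for p near 0 or 1 and for small n."""
    p = successes / float(n)
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return center - half, center + half
```

For 0 successes in 10 trials the Wald interval collapses to (0, 0), while the Wilson interval correctly reports an upper bound near 0.28, which is why it is the recommended choice for rare-event Monte Carlo tallies.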
Extra Chance Generalized Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; Sanz-Serna, J. M.
2015-01-01
We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of point sets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
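The error improvement at issue is easy to demonstrate with a one-dimensional example: integrating with the base-2 van der Corput low-discrepancy sequence versus pseudorandom points. The sequence below is the standard radical-inverse construction; the integrand and point count are arbitrary illustrative choices.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy
    sequence (radical inverse of 1..n)."""
    pts = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k:
            x += f * (k % base)     # reflect the digits of i about the point
            k //= base
            f /= base
        pts.append(x)
    return pts

def integrate(points, f):
    """Equal-weight quadrature: the sample mean of f over the points."""
    return sum(f(x) for x in points) / float(len(points))
```

For the integral of x^2 on [0, 1] (exact value 1/3) with 1024 points, the low-discrepancy estimate lands within about 5e-4, roughly an order of magnitude tighter than the typical pseudorandom error of about 1e-2; but, as the abstract notes, the i.i.d.-based variance estimator gives no hint of that improvement.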
NASA Astrophysics Data System (ADS)
Yan, Qiang; Shao, Lin
2017-03-01
Current popular Monte Carlo simulation codes for simulating electron bombardment in solids focus primarily on electron trajectories instead of electron-induced displacements. Here we report a Monte Carlo simulation code, DEEPER (damage creation and particle transport in matter), developed for calculating 3-D distributions of displacements produced by electrons of incident energies up to 900 MeV. Electron elastic scattering is calculated using full Mott cross sections for high accuracy, and primary-knock-on-atom (PKA)-induced damage cascades are modeled using the ZBL potential. We compare and show large differences in the 3-D distributions of displacements and electrons in electron-irradiated Fe. The distributions of total displacements are similar to those of PKAs at low electron energies, but they are substantially different for higher-energy electrons due to the shifting of PKA energy spectra towards higher energies. The study is important for evaluating electron-induced radiation damage, for applications that use high-flux electron beams to intentionally introduce defects, and for the use of electron analysis beams in the microstructural characterization of nuclear materials.
Monte Carlo docking with ubiquitin.
Cummings, M. D.; Hart, T. N.; Read, R. J.
1995-01-01
The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages stem from the often-used build-up factors, which are extrapolated from high to low energies or applied under unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-to-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
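For intuition about what the Monte Carlo side of such a comparison computes, here is a minimal narrow-beam sketch (not from the paper): uncollided transmission through a slab, sampled from exponential free paths, checked against the analytic exp(-mu*t). The attenuation coefficient and thickness are illustrative values, and build-up from scattered photons (the very effect the build-up factor corrects for) is deliberately ignored.

```python
import math, random

def mc_transmission(mu, t, n, rng):
    """Monte Carlo estimate of uncollided (narrow-beam) transmission:
    sample exponential free paths and count photons whose first
    collision lies beyond the slab thickness t."""
    passed = 0
    for _ in range(n):
        # free path length ~ Exp(mu); 1-u avoids log(0)
        path = -math.log(1.0 - rng.random()) / mu
        if path > t:
            passed += 1
    return passed / n

rng = random.Random(42)
mu, t = 0.5, 2.0                # cm^-1 and cm, illustrative values
mc = mc_transmission(mu, t, 200_000, rng)
exact = math.exp(-mu * t)       # analytic narrow-beam attenuation
```

The two agree to statistical precision here; in a real shielding problem the Monte Carlo result also includes scattered (build-up) contributions that this uncollided model omits.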
MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS
Roth, Nathaniel; Kasen, Daniel
2015-03-15
We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas-radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.
Monte Carlo Ground State Energy for Trapped Boson Systems
NASA Astrophysics Data System (ADS)
Rudd, Ethan; Mehta, N. P.
2012-06-01
Diffusion Monte Carlo (DMC) and Green's Function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V0 exp(-|ri-rj|^2/r0^2) were implemented in the DMC code. These results were verified for small values of V0 via a first-order perturbation theory approximation, for which the N-particle matrix element evaluates to (N choose 2) V0 (1 + 1/r0^2)^(-3/2). By obtaining the scattering length from the 2-body potential in the perturbative regime (V0 << 1), ground state energy results were compared to modern renormalized models by P. R. Johnson et al., New J. Phys. 11, 093022 (2009).
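The DMC algorithm referenced above can be sketched for the simplest case: a single particle in a 1D harmonic trap with no pairwise interaction, where the exact ground-state energy is 0.5 in natural units. This is a generic textbook DMC (diffusion plus birth/death branching with population control), not the authors' code; all parameter values are illustrative.

```python
import math, random

def dmc_ground_energy(n_target=400, dt=0.01, n_steps=2000, n_equil=500, seed=7):
    """Minimal diffusion Monte Carlo (no importance sampling) for one
    particle in a 1D harmonic trap, V(x) = x^2/2. In units with
    hbar = m = omega = 1 the exact ground-state energy is 0.5."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref = 0.5                      # reference energy for population control
    samples = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            xn = x + math.sqrt(dt) * rng.gauss(0.0, 1.0)   # free diffusion
            w = math.exp(-dt * (0.5 * xn * xn - e_ref))    # branching weight
            for _ in range(min(int(w + rng.random()), 3)): # stochastic rounding
                new.append(xn)
        walkers = new or [0.0]
        mean_v = sum(0.5 * x * x for x in walkers) / len(walkers)
        # nudge e_ref to keep the population near n_target
        e_ref = mean_v + 0.5 * (1.0 - len(walkers) / n_target)
        if step >= n_equil:
            # for this Gaussian ground state the walker-averaged potential
            # energy converges to E0 = 0.5
            samples.append(mean_v)
    return sum(samples) / len(samples)

energy = dmc_ground_energy()
```

Adding the Gaussian pair interaction from the abstract would only change the potential evaluated in the branching weight; the diffusion and population-control machinery stays the same.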
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of an optical coherence tomography (OCT) experiment performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
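The Henyey-Greenstein phase function at the heart of such photon-transport codes has a standard closed-form inverse CDF, so the scattering-angle cosine can be sampled directly. A short sketch (generic, not LATIS code; the anisotropy value g = 0.9 is a typical tissue-like assumption):

```python
import math, random

def sample_hg_cos_theta(g, rng):
    """Sample the scattering-angle cosine from the Henyey-Greenstein
    phase function via the standard inverse-CDF formula."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(0)
g = 0.9                                # tissue-like anisotropy (assumption)
samples = [sample_hg_cos_theta(g, rng) for _ in range(200_000)]
mean_cos = sum(samples) / len(samples)
```

A useful check on the sampler is that the mean cosine of the sampled angles equals g, which is the defining property of the Henyey-Greenstein anisotropy parameter.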
Mammography X-Ray Spectra Simulated with Monte Carlo
Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A.
2008-08-11
Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collides with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.
Monte Carlo modelling of the influence of boron microdistribution on BNCT microdosimetry.
Hugtenburg, Richard P; Baker, Adam E R; Green, Stuart
2009-07-01
The ion transport Monte Carlo code SRIM has been used to calculate single event lineal energy spectra for the products of the boron-neutron capture reaction in a water-based medium. The event spectra have been benchmarked against spectra measured with a boron-loaded tissue-equivalent proportional counter (TEPC). Agreement is excellent and supports the use of Monte Carlo methods in understanding the influence of boron delivery on the effectiveness of boron-neutron capture therapy (BNCT).
Catfish: A Monte Carlo simulator for black holes at the LHC
NASA Astrophysics Data System (ADS)
Cavaglià, M.; Godang, R.; Cremaldi, L.; Summers, D.
2007-09-01
We present a new Fortran Monte Carlo generator to simulate black hole events at CERN's Large Hadron Collider. The generator interfaces to the PYTHIA Monte Carlo fragmentation code. The physics of the BH generator includes, but is not limited to, inelasticity effects, exact field emissivities, corrections to semiclassical black hole evaporation and gravitational energy loss at formation. These features are essential to realistically reconstruct the detector response and to test different models of black hole formation and decay at the LHC.
O'Rourke, Patrick Francis
2016-10-27
The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time, as well as for the cumulative fission number, and to introduce several basic Monte Carlo concepts.
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
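The bias described above has a simple statistical core: splitting one history's correlated contributions into "independent" histories discards the covariance term in the variance of the summed tally. A toy demonstration (synthetic scores, not Shift output; the correlation model is an assumption for illustration):

```python
import random

def mean_and_var(xs):
    """Sample mean and sample variance of a list of scores."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v

rng = random.Random(3)
n_hist = 20_000
pairs = []
for _ in range(n_hist):
    a = rng.random()                    # tally score before leaving the domain
    b = 0.5 * a + 0.5 * rng.random()    # score after re-entry, correlated with a
    pairs.append((a, b))

# Correct treatment: one combined score per source history.
_, v_sum = mean_and_var([a + b for a, b in pairs])
var_correct = v_sum / n_hist            # variance of the mean tally

# Approximation in the abstract: each re-entrant track becomes a new history.
flat = [s for p in pairs for s in p]
_, v_pool = mean_and_var(flat)
var_split = 2.0 * v_pool / n_hist       # same total tally, but 2N "histories"
```

With positively correlated crossings, Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b), so the split treatment, which can only see the per-crossing variances, under-predicts the true tally variance, mirroring the under-prediction near domain boundaries noted above.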
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-12-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to tritons ((3)H(+)) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Coefficients were calculated using Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and calculation of gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 3%. The greatest difference, 43%, occurred at 30 MeV.
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2011-01-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent for isotropic exposure of an adult female and an adult male to deuterons ((2)H(+)) in the energy range 10 MeV-1 TeV (0.01-1000 GeV). Coefficients were calculated using the Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms. Phantoms were modified to allow calculation of the effective dose to a Reference Person using tissues and tissue weighting factors from 1990 and 2007 recommendations of the International Commission on Radiological Protection (ICRP) and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. Coefficients for the equivalent and effective dose incorporated a radiation weighting factor of 2. At 15 of 19 energies for which coefficients for the effective dose were calculated, coefficients based on ICRP 1990 and 2007 recommendations differed by <3%. The greatest difference, 47%, occurred at 30 MeV.
Copeland, Kyle; Parker, Donald E; Friedberg, Wallace
2010-12-01
Conversion coefficients were calculated for fluence-to-absorbed dose, fluence-to-equivalent dose, fluence-to-effective dose and fluence-to-gray equivalent, for isotropic exposure of an adult male and an adult female to helions ((3)He(2+)) in the energy range of 10 MeV to 1 TeV (0.01-1000 GeV). Calculations were performed using Monte Carlo transport code MCNPX 2.7.C and BodyBuilder™ 1.3 anthropomorphic phantoms modified to allow calculation of effective dose using tissues and tissue weighting factors from either the 1990 or 2007 recommendations of the International Commission on Radiological Protection (ICRP), and gray equivalent to selected tissues as recommended by the National Council on Radiation Protection and Measurements. At 15 of the 19 energies for which coefficients for effective dose were calculated, coefficients based on ICRP 2007 and 1990 recommendations differed by less than 2%. The greatest difference, 62%, occurred at 100 MeV.
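The dosimetric quantities in the three conversion-coefficient studies above follow the ICRP formalism: equivalent dose H_T = w_R * D_T per tissue, and effective dose E = sum over tissues of w_T * H_T. A minimal sketch of that bookkeeping (the tissue weights and doses below are illustrative stand-ins, not actual ICRP 1990/2007 values; w_R = 2 is the radiation weighting factor quoted in the deuteron abstract):

```python
def effective_dose(absorbed_dose_gy, tissue_weights, w_r):
    """ICRP formalism: equivalent dose H_T = w_R * D_T, effective dose
    E = sum_T w_T * H_T. Inputs are per-tissue absorbed doses in Gy."""
    assert abs(sum(tissue_weights.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(w_t * w_r * absorbed_dose_gy[t]
               for t, w_t in tissue_weights.items())

# Illustrative weights only; the real ICRP 1990 and 2007 tables differ,
# which is exactly what the compared coefficient sets quantify.
weights = {"lung": 0.12, "stomach": 0.12, "gonads": 0.08,
           "bladder": 0.04, "remainder": 0.64}
doses = {t: 1.0e-12 for t in weights}   # uniform 1 pGy per tissue (made up)
e = effective_dose(doses, weights, w_r=2.0)
```

A handy sanity check: under a uniform absorbed dose the tissue weights drop out and E = w_R * D regardless of the weighting scheme, so the 1990-vs-2007 differences reported above arise entirely from non-uniform organ dose distributions.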
NASA Astrophysics Data System (ADS)
Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.
2014-08-01
A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions and discussed its validity for the proton emission spectra.
NASA Astrophysics Data System (ADS)
Aygun, Bünyamin; Korkut, Turgay; Karabulut, Abdulhalik
2016-05-01
Despite the possible depletion of fossil fuels and growing energy needs, the use of radiation continues to increase, and the security-focused debate about planned nuclear power plants still continues. The objective of this work is to prevent radiation from nuclear reactors from spreading into the environment. To this end, we produced new high-performance shielding materials that strongly attenuate the radiation generated in reactor operation. Additives used in the new shielding materials include iron (Fe), rhenium (Re), nickel (Ni), chromium (Cr), boron (B), copper (Cu), tungsten (W), tantalum (Ta) and boron carbide (B4C). The powder metallurgy technique was used to produce the new shielding materials. The CERN FLUKA and Geant4 Monte Carlo simulation codes and WinXCom were used to determine the component percentages of the high-temperature-resistant materials for shielding high-level fast neutrons and gamma rays. Super alloys were produced, and experimental fast-neutron dose-equivalent measurements and gamma-radiation absorption measurements of the new shielding materials were carried out. The results of these experiments indicate that the materials are good shields against gamma rays and neutrons, and that the products can be used safely not only in reactors but also in nuclear medicine treatment rooms, nuclear waste storage, nuclear research laboratories, and against cosmic radiation in space vehicles.
MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT
J. S. HENDRICK; G. W. MCKINNEY; L. J. COX
2000-01-01
The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPX, a general-purpose, three-dimensional, time-dependent, continuous-energy, fully coupled N-Particle Monte Carlo transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.
MCNP™ Monte Carlo: A precis of MCNP
Adams, K.J.
1996-06-01
MCNP™ is a general-purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross-section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600-page manual which is augmented by numerous Los Alamos technical reports that detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Monte Carlo modelling of positron transport in real world applications
NASA Astrophysics Data System (ADS)
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
Monte Carlo calculation of patient organ doses from computed tomography.
Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi
2014-01-01
In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from the chamber measurements for the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation with 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL was in agreement within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured value within 5% in a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone increased by up to 2-3 times that of soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.
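The dose-volume histograms used to report the organ doses above reduce a per-voxel dose distribution to the fraction of an organ's volume receiving at least each threshold dose. A minimal sketch of that computation (the voxel doses below are made-up illustrative values, not data from the study):

```python
def cumulative_dvh(doses, thresholds):
    """Cumulative dose-volume histogram: for each threshold dose,
    the fraction of voxels receiving at least that dose."""
    n = len(doses)
    return [(d, sum(1 for x in doses if x >= d) / n) for d in thresholds]

# Illustrative voxel doses for one organ, in mGy (made up for the sketch).
organ = [23.0, 24.5, 22.1, 25.2, 23.8, 24.0]
dvh = cumulative_dvh(organ, [0.0, 23.0, 25.0])
mean_dose = sum(organ) / len(organ)
```

By construction the curve starts at 1.0 (every voxel receives at least zero dose) and decreases monotonically with the threshold.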
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
Program summary
Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorised or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on physical system to be simulated
Classification: 7.6; 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb
Solution method: Quantum Monte Carlo
Running time: 225 min with 20 particles (4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc
Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.
2004-12-01
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
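The correlated sampling idea behind CSnrc can be illustrated with a toy ratio estimate: when two "geometries" differ only slightly, reusing the same random points for both makes their estimation errors cancel in the ratio. The integrands, the 5% perturbation, and all parameters below are illustrative assumptions, not EGSnrc internals.

```python
import math, random

def estimate_ratio(n, correlated, seed):
    """Ratio of two similar 'dose' integrals on [0,1]; correlated sampling
    reuses the same random points for both geometries."""
    rng = random.Random(seed)
    f = lambda x: math.exp(-x)           # response of geometry A (toy model)
    g = lambda x: math.exp(-1.05 * x)    # slightly perturbed geometry B
    xs = [rng.random() for _ in range(n)]
    ys = xs if correlated else [rng.random() for _ in range(n)]
    return (sum(f(x) for x in xs) / n) / (sum(g(y) for y in ys) / n)

def spread(correlated, trials=200, n=2000):
    """Standard deviation of the ratio estimate over independent trials."""
    vals = [estimate_ratio(n, correlated, seed) for seed in range(trials)]
    m = sum(vals) / trials
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (trials - 1))

s_corr = spread(True)     # correlated sampling
s_indep = spread(False)   # independent sampling
```

The correlated estimator's spread is far smaller because the numerator and denominator errors are almost perfectly correlated and cancel, which is the mechanism behind the factor-of-64 efficiency gains reported above.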
2014-03-27
... of the vacuum-filled aluminum cylinder which forms a part of the DD108 accelerator head. The isotropic source is centered therein. The scintillator ... radiation transport codes, as the excerpt below explains. ... The shield chosen for the study was an iron box with liners of various thicknesses of ... Model Design: Within MCNP6, the scintillator crystal was modeled as a 4x4 mm^2 right circular cylinder (RCC) suspended in vacuum inside a 14x16 mm^2
1974-07-31
... Phase Shift for Values of k From 0 to 3 ... Values of w Used for Integration of Neutron-Width Distributions with One ... associated with multi-group codes, which use flux-averaged cross sections based on assumed flux distributions which may or may not be appropriate. By use of ... providing the output is in the specified format. SAM-F then calculates and provides an edit of the desired neutron fluxes and flux-functionals. In addition
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
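The classical baseline against which the abstract's near-quadratic quantum speedup is measured is the 1/sqrt(N) error scaling of plain Monte Carlo mean estimation: quadrupling the samples only halves the error. A quick empirical sketch of that classical scaling (the integrand and sample counts are illustrative choices):

```python
import math, random

def mc_mean(n, rng):
    """Plain Monte Carlo estimate of E[f(U)] for f(u) = u^2, U ~ U[0,1];
    the true value is 1/3."""
    return sum(rng.random() ** 2 for _ in range(n)) / n

def rmse(n, trials, seed):
    """Root-mean-square error of the estimator over repeated trials."""
    rng = random.Random(seed)
    errs = [(mc_mean(n, rng) - 1.0 / 3.0) ** 2 for _ in range(trials)]
    return math.sqrt(sum(errs) / trials)

r_small = rmse(100, 1000, 1)
r_large = rmse(1600, 1000, 2)   # 16x the samples: classically ~4x smaller RMSE
```

The quantum algorithm described above would instead achieve error scaling like 1/N in the number of subroutine invocations, so the same 16x budget increase would shrink the error roughly 16-fold.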
Self-learning Monte Carlo method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
Adiabatic optimization versus diffusion Monte Carlo methods
NASA Astrophysics Data System (ADS)
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Gentile, N A; Kalos, M H; Brunner, T A
2005-03-22
Domain decomposed Monte Carlo codes, like other domain-decomposed codes, are difficult to debug. Domain decomposition is prone to error, and interactions between the domain decomposition code and the rest of the algorithm often produce subtle bugs. These bugs are particularly difficult to find in a Monte Carlo algorithm, in which the results have statistical noise. Variations in the results due to statistical noise can mask errors when comparing the results to other simulations or analytic results. If a code can get the same result on one domain as on many, debugging the whole code is easier. This reproducibility property is also desirable when comparing results obtained on different numbers of processors and domains. We describe how reproducibility, to machine precision, is obtained on different numbers of domains in an Implicit Monte Carlo photonics code.
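One common way to obtain this kind of domain-count-independent reproducibility (a sketch of the general technique, not of the authors' code) is to key each particle's random stream to a global particle ID, so every history is bit-identical no matter which domain, or how many domains, processes it:

```python
import random

def particle_history(particle_id, n_steps=8):
    # Seed a private RNG stream from the particle's global ID, so the
    # random walk depends only on the ID, never on domain assignment.
    rng = random.Random(particle_id)
    x = 0.0
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)
    return x

ids = range(100)
one_domain = {i: particle_history(i) for i in ids}
# same particles split over two "domains" and processed in a different order
domain_a = {i: particle_history(i) for i in ids if i % 2 == 0}
domain_b = {i: particle_history(i) for i in ids if i % 2 == 1}
two_domains = {**domain_b, **domain_a}
```

Because each stream is private, the one-domain and two-domain runs tally identical results, so any discrepancy between them flags a decomposition bug rather than statistical noise.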
Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.
1993-04-01
Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.
Monte Carlo inversion of seismic data
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.
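The acceptance test at the heart of such an inversion can be sketched as follows: sample models uniformly over their bounds and keep every model whose predictions fit the data to within a tolerance. The forward model, bounds, and tolerance below are toy assumptions for illustration, not from the paper:

```python
import random

def mc_inversion(forward, data, bounds, tol, n_trials, seed=0):
    # Uniformly sample the model space; keep every model whose predicted
    # data fit the observations to within tol (a "passing" model).
    rng = random.Random(seed)
    passing = []
    for _ in range(n_trials):
        model = [rng.uniform(lo, hi) for lo, hi in bounds]
        predicted = forward(model)
        if max(abs(p - d) for p, d in zip(predicted, data)) <= tol:
            passing.append(model)
    return passing

# toy linear problem: data = (m0 + m1, m0 - m1), true model (2, 1)
forward = lambda m: (m[0] + m[1], m[0] - m[1])
models = mc_inversion(forward, data=(3.0, 1.0),
                      bounds=[(0.0, 5.0), (0.0, 5.0)], tol=0.3, n_trials=20_000)
```

The spread of the passing models is what stands in for the linear theory's standard deviations; biased search strategies distort exactly this spread, which is the paper's warning.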
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, in which Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Monte Carlo study of a Cyberknife stereotactic radiosurgery system
Araki, Fujio
2006-08-15
This study investigated small-field dosimetry for a Cyberknife stereotactic radiosurgery system using Monte Carlo simulations. The EGSnrc/BEAMnrc Monte Carlo code was used to simulate the Cyberknife treatment head, and the DOSXYZnrc code was implemented to calculate central axis depth-dose curves, off-axis dose profiles, and relative output factors for various circular collimator sizes of 5 to 60 mm. Water-to-air stopping power ratios necessary for clinical reference dosimetry of the Cyberknife system were also evaluated by Monte Carlo simulations. Additionally, a beam quality conversion factor, k{sub Q}, for the Cyberknife system was evaluated for cylindrical ion chambers with different wall material. The accuracy of the simulated beam was validated by agreement within 2% between the Monte Carlo calculated and measured central axis depth-dose curves and off-axis dose profiles. The calculated output factors were compared with those measured by a diode detector and an ion chamber in water. The diode output factors agreed within 1% with the calculated values down to a 10 mm collimator. The output factors with the ion chamber decreased rapidly for collimators below 20 mm. These results were confirmed by the comparison to those from Monte Carlo methods with voxel sizes and materials corresponding to both detectors. It was demonstrated that the discrepancy in the 5 and 7.5 mm collimators for the diode detector is due to the water nonequivalence of the silicon material, and the dose fall-off for the ion chamber is due to its large active volume against collimators below 20 mm. The calculated stopping power ratios of the 60 mm collimator from the Cyberknife system (without a flattening filter) agreed within 0.2% with those of a 10x10 cm{sup 2} field from a conventional linear accelerator with a heavy flattening filter and the incident electron energy, 6 MeV. The difference in the stopping power ratios between 5 and 60 mm collimators was within 0.5% at a 10 cm depth in
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
Oleynik, D. S.
2015-12-15
A new version of the tally module of the MCU software package is developed, implementing the approach, recommended by the international standard on estimating measurement uncertainty (ISO 13005), for taking the uncertainty in initial data directly into account. The new module makes it possible to evaluate the effect of uncertainty in initial data (caused by technological tolerances in the fabrication of structural members of the core) on the neutronic characteristics of the reactor. The developed software is adapted to parallel computing on multiprocessor computers, which significantly reduces the computation time: the parallelization coefficient is almost equal to 1. Testing is performed on the criticality problem for the Godiva benchmark experiment and also for infinite lattices of fuel assemblies of the VVER-440, VVER-1000, and VVER-1200. The results of calculations of the uncertainty in neutronic characteristics (effective multiplication factor, fission reaction rate) caused by uncertainties in initial data due to technological tolerances are compared, in the first case, to published results obtained using the precision MCNP5 code and, in the second case, to those obtained by means of the RADAR engineering program. Good agreement of results is achieved in all cases.
kmos: A lattice kinetic Monte Carlo framework
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
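The efficiency-determining kernel of any lattice kMC code is the event-selection step: pick the next event with probability proportional to its rate and advance the clock by an exponentially distributed increment. A generic sketch of that step (the BKL/Gillespie-style selection, not kmos's actual implementation; the event names and rates are hypothetical):

```python
import math
import random

def kmc_step(rates, rng):
    # Choose an event with probability rate/total by walking the
    # cumulative rate list, then draw the exponential time increment.
    total = sum(rates.values())
    r = rng.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

rng = random.Random(1)
rates = {"adsorb": 2.0, "desorb": 0.5, "diffuse": 1.5}
events = [kmc_step(rates, rng)[0] for _ in range(10_000)]
```

kmos's lattice-size-independent runtime comes from updating only the locally affected entries of this rate list after each executed event, rather than rebuilding it.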
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
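The derivation described above is straightforward to reproduce: subdivide the interval so finely that at most one count falls in each subinterval, and let each fire with probability mean/n, giving the binomial model that limits to the Poisson counting distribution. A minimal sketch with toy parameters (not the original program):

```python
import random

def simulate_counts(mean_counts, n_subintervals, rng):
    # Each subinterval fires independently with probability mean/n;
    # the total is binomial, approaching Poisson as n grows.
    p = mean_counts / n_subintervals
    return sum(rng.random() < p for _ in range(n_subintervals))

rng = random.Random(7)
trials = [simulate_counts(4.0, 1000, rng) for _ in range(5000)]
sample_mean = sum(trials) / len(trials)
```

A quick check of the Poisson limit is that the sample variance of the trials comes out close to the mean.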
A comparison of Monte Carlo generators
Golan, Tomasz
2015-05-15
A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π{sup +} two-dimensional energy vs. cosine distribution.
Monte Carlo studies of uranium calorimetry
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
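A reliability simulation of this kind reduces to sampling a load and a member strength and counting how often the load wins. A minimal sketch with assumed (hypothetical) normal distributions, not the article's boom-structure numbers:

```python
import random

def failure_probability(n_trials, seed=3):
    # Count trials in which the sampled load exceeds the sampled strength;
    # the fraction estimates P(failure).
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        load = rng.gauss(10.0, 2.0)      # assumed load distribution
        strength = rng.gauss(16.0, 2.0)  # assumed member strength
        failures += load > strength
    return failures / n_trials

p_fail = failure_probability(200_000)
```

For these two normals the exact answer is P(Z > 6/sqrt(8)) ≈ 1.7%, which is the theoretical value the simulated estimate should approach.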
Search and Rescue Monte Carlo Simulation.
1985-03-01
confidence interval) of the number of lives saved. A single page output and computer graphic present the information to the user in an easily understood...format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
Monte Carlo studies of ARA detector optimization
NASA Astrophysics Data System (ADS)
Stockham, Jessica
2013-04-01
The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most of the cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous energy nuclear data has been investigated.
Parallel implementation of a Monte Carlo molecular simulation program
Carvalho; Gomes; Cordeiro
2000-05-01
Molecular simulation methods such as molecular dynamics and Monte Carlo are fundamental for the theoretical calculation of macroscopic and microscopic properties of chemical and biochemical systems. These methods often rely on heavy computations, and one sometimes feels the need to run them on powerful massively parallel machines. For moderate problem sizes, however, a less powerful and less expensive solution based on a network of workstations may be quite satisfactory. The present work outlines the strategy adopted in developing, using the message passing model, a parallel version of a molecular simulation code for use on a network of workstations. The parallel code is an adaptation of an older sequential code using the Metropolis Monte Carlo method. In this case, the Message Passing Interface (MPI) was used as the interprocess communication library, although the code could easily be adapted for other message passing systems such as the Parallel Virtual Machine (PVM). For simple systems, it is shown that speedups of 2 can be achieved with four processes using this inexpensive solution. For bigger and more complex simulated systems, even better speedups might be obtained, which indicates that the presented approach is appropriate for the efficient use of a network of workstations in parallel processing.
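The sequential kernel such a code parallelizes is the standard Metropolis update: propose a local change and accept it with probability min(1, exp(-beta * dE)). A minimal single-process sketch on a 1-D Ising ring (a toy stand-in for the molecular system, not the authors' code):

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    # One sweep of single-spin-flip Metropolis on a 1-D Ising ring:
    # accept each proposed flip with probability min(1, exp(-beta * dE)).
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        dE = 2.0 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

rng = random.Random(5)
spins = [1] * 64
for _ in range(500):
    metropolis_sweep(spins, beta=0.1, rng=rng)  # high temperature: disordered
magnetization = abs(sum(spins)) / len(spins)
```

In the message passing version, independent Markov chains (or spatial subdomains) of exactly this loop run on separate workstations, with MPI used to collect the accumulated averages.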
Quantum Monte Carlo Endstation for Petascale Computing
David Ceperley
2011-03-02
CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular
Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission
NASA Technical Reports Server (NTRS)
Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
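A Monte Carlo dispersion analysis of this kind perturbs the nominal state with assumed 1-sigma errors, propagates each case, and tallies constraint violations. The sketch below uses a stand-in linearized "propagator" and hypothetical numbers, not the TESS design values:

```python
import random

def dispersion_rate(nominal, n_runs, seed=11):
    # Perturb the nominal injection with assumed 1-sigma errors, "fly"
    # each case, and tally violations of a (hypothetical) perigee floor.
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_runs):
        perigee = nominal["perigee_km"] + rng.gauss(0.0, 15.0)   # launch dispersion
        tcm = nominal["tcm_mps"] * (1.0 + rng.gauss(0.0, 0.02))  # maneuver execution error
        # toy linear sensitivity standing in for the real propagation
        final_perigee = perigee + 120.0 * (tcm - nominal["tcm_mps"])
        violations += final_perigee < nominal["floor_km"]
    return violations / n_runs

rate = dispersion_rate(
    {"perigee_km": 600.0, "tcm_mps": 10.0, "floor_km": 550.0}, 50_000)
```

The violation rate across the dispersed runs is the quantity that flags mission risk and discriminates between candidate trajectories.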
Comparison of Monte Carlo methods for criticality benchmarks: Pointwise compared to multigroup
Choi, J.S.; Alesso, P.H.; Pearson, J.S.
1989-01-01
Transport codes use multigroup cross sections, in which neutrons are divided into broad energy groups and the monoenergetic equation is solved for each group with a group-averaged cross section. Monte Carlo codes differ in that they allow the use of the most basic pointwise cross-section data directly in a calculation. Most of the first Monte Carlo codes could not exploit this feature, however, because of the memory limitations of early computers and the lack of pointwise cross-section data. Consequently, codes written in the 1970s, such as KENO-IV and MORSE-C, were adapted to use multigroup cross-section sets similar to those used in the S{sub n} transport codes. With advances in computer memory capacities and the availability of pointwise cross-section sets, new Monte Carlo codes employing pointwise cross-section libraries, such as the Los Alamos National Laboratory code MCNP and the Lawrence Livermore National Laboratory (LLNL) code COG, were developed for criticality as well as radiation transport calculations. To compare pointwise and multigroup Monte Carlo methods for criticality benchmark calculations, this paper presents and evaluates the results from the KENO-IV, MORSE-C, MCNP, and COG codes. The critical experiments selected for benchmarking include LLNL fast metal systems and low-enriched uranium moderated and reflected systems.
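The group averaging described above can be illustrated with a flux-weighted collapse of pointwise data; the energy grid, cross sections, and 1/E flux shape below are invented for illustration:

```python
# Sketch: collapsing pointwise cross sections sigma(E) into a group-averaged
# value with a weighting flux phi(E), via the trapezoid rule. All data are
# illustrative, not evaluated nuclear data.
def group_average(energies, sigma, phi, e_lo, e_hi):
    """Flux-weighted average of sigma over the group [e_lo, e_hi]."""
    num = 0.0
    den = 0.0
    for i in range(len(energies) - 1):
        e0, e1 = energies[i], energies[i + 1]
        if e1 <= e_lo or e0 >= e_hi:
            continue
        w0, w1 = sigma[i] * phi[i], sigma[i + 1] * phi[i + 1]
        num += 0.5 * (w0 + w1) * (e1 - e0)
        den += 0.5 * (phi[i] + phi[i + 1]) * (e1 - e0)
    return num / den

# 1/E-like flux and a smooth capture cross section on a coarse pointwise grid
energies = [1.0, 2.0, 4.0, 8.0, 16.0]   # eV, illustrative
sigma    = [10.0, 7.0, 5.0, 3.5, 2.5]   # barns, illustrative
phi      = [1.0 / e for e in energies]

sigma_g = group_average(energies, sigma, phi, 1.0, 16.0)
```

A pointwise Monte Carlo code skips this collapse entirely and looks up sigma(E) at each collision energy, which is the distinction the paper benchmarks.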
Monte Carlo track structure for radiation biology and space applications
NASA Technical Reports Server (NTRS)
Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.
2001-01-01
Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have been continuously modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation, and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations of electrons and ions, in the form of parametric equations that make it easy to reproduce the data. Stopping power and the radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulating the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha particles. Distributions are presented for walled and wall-less counters. The data show a contribution of indirect effects to the lineal energy distribution for the walled counter responses, even at such a low ion energy.
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Sternat, M.; Nichols, T.
2011-06-09
Reactor burnup or depletion codes are used extensively in the fields of nuclear forensics and nuclear safeguards. Two common codes are MONTEBURNS and MCNPX/CINDER. These are Monte-Carlo depletion routines utilizing MCNP for neutron transport calculations and either ORIGEN or CINDER for burnup calculations. Uncertainties exist in the MCNP steps, but this information is neither passed to the depletion calculations nor saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 150-day cycle broken into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. The distributions for each code serve as a statistical benchmark, and comparisons are made. It was expected that the gram-quantity and k-effective histograms would be normally distributed, since they were produced by a Monte-Carlo routine, but some of the results appear not to be. Statistical analyses are performed using the {chi}{sup 2} test against a normal distribution for the k-effective results and several isotopes, including {sup 134}Cs, {sup 137}Cs, {sup 235}U, {sup 238}U, {sup 237}Np, {sup 238}Pu, {sup 239}Pu, and {sup 240}Pu.
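A minimal sketch of the histogram-and-test procedure described above, using Scott's rule for the bin width and a chi-square statistic against a fitted normal. The k-effective sample is synthetic stand-in data, not ORR output:

```python
import math
import random

random.seed(1)

def scott_bin_width(data):
    """Scott's rule: h = 3.49 * s * n**(-1/3)."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 3.49 * s * n ** (-1.0 / 3.0)

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Stand-in for repeated k-effective results from independent depletion runs
data = [random.gauss(1.002, 0.0005) for _ in range(200)]
n = len(data)
mu = sum(data) / n
sd = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))

h = scott_bin_width(data)
lo, hi = min(data), max(data)
nbins = max(1, math.ceil((hi - lo) / h))

# chi-square statistic of observed bin counts against the fitted normal
chi2 = 0.0
for b in range(nbins):
    e0 = lo + b * h
    e1 = e0 + h
    observed = sum(e0 <= x < e1 for x in data)
    expected = n * (normal_cdf(e1, mu, sd) - normal_cdf(e0, mu, sd))
    if expected > 0:
        chi2 += (observed - expected) ** 2 / expected
```

Comparing chi2 against the appropriate chi-square critical value (degrees of freedom reduced by the two fitted parameters) gives the normality decision the abstract describes.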
Ahmad, I.; Back, B.B.; Betts, R.R.
1995-08-01
An essential component in the assessment of the significance of the results from APEX is a demonstrated understanding of the acceptance and response of the apparatus. This requires detailed simulations which can be compared to the results of various source and in-beam measurements. These simulations were carried out using the computer codes EGS and GEANT, both specifically designed for this purpose. As far as is possible, all details of the geometry of APEX were included. We compared the results of these simulations with measurements using electron conversion sources, positron sources and pair sources. The overall agreement is quite acceptable and some of the details are still being worked on. The simulation codes were also used to compare the results of measurements of in-beam positron and conversion electrons with expectations based on known physics or other methods. Again, satisfactory agreement is achieved. We are currently working on the simulation of various pair-producing scenarios such as the decay of a neutral object in the mass range 1.5-2.0 MeV and also the emission of internal pairs from nuclear transitions in the colliding ions. These results are essential input to the final results from APEX on cross section limits for various, previously proposed, sharp-line producing scenarios.
Monte Carlo Particle Transport: Algorithm and Performance Overview
Gentile, N; Procassini, R; Scott, H
2005-06-02
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
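The basic analogue Monte Carlo loop described here (sample a free flight, then absorb, scatter, or leak) can be sketched in a few lines; the slab geometry and cross sections are illustrative, not real nuclear data:

```python
import math
import random

random.seed(2)

# Minimal analogue Monte Carlo: monoenergetic neutrons in a 1D slab.
SIGMA_T = 1.0          # total macroscopic cross section, 1/cm (illustrative)
SIGMA_A = 0.3          # absorption cross section, 1/cm (illustrative)
SLAB_THICKNESS = 5.0   # cm

def transmit_one():
    """Track one neutron; return True if it leaks out the far face."""
    x, mu = 0.0, 1.0                     # start at left face, moving right
    while True:
        # sample a free-flight distance from the exponential distribution
        x += mu * (-math.log(random.random()) / SIGMA_T)
        if x >= SLAB_THICKNESS:
            return True                   # transmitted
        if x < 0.0:
            return False                  # leaked backwards
        if random.random() < SIGMA_A / SIGMA_T:
            return False                  # absorbed at the collision site
        mu = random.uniform(-1.0, 1.0)    # isotropic scatter in 1D slab geometry

n = 20000
transmission = sum(transmit_one() for _ in range(n)) / n
```

The statistical noise the paper mentions is visible directly: the transmission estimate carries a binomial standard error of roughly sqrt(p(1-p)/n).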
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities on microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treating polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to those of the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model, and the results are presented together with the algorithm description.
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed conventional Monte Carlo outlier detection in outlier diagnosis. After the outliers were removed, the root mean square error of prediction for the model validated by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
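A minimal sketch of the cross-prediction idea, assuming a simple 1D least-squares model rather than the chemometric models of the paper; the dataset and the planted outlier are synthetic:

```python
import random

random.seed(3)

# Monte Carlo cross-prediction outlier detection on a toy 1D dataset:
# repeatedly split into train/test, fit y = a + b*x on the training part,
# and accumulate each sample's out-of-model prediction errors.
xs = [i / 10.0 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.05) for x in xs]
ys[10] += 3.0                       # planted outlier

errors = {i: [] for i in range(len(xs))}
for _ in range(300):
    idx = list(range(len(xs)))
    random.shuffle(idx)
    train, test = idx[:35], idx[35:]
    # closed-form least-squares fit on the training subset
    mx = sum(xs[i] for i in train) / len(train)
    my = sum(ys[i] for i in train) / len(train)
    b = (sum((xs[i] - mx) * (ys[i] - my) for i in train)
         / sum((xs[i] - mx) ** 2 for i in train))
    a = my - b * mx
    for i in test:
        errors[i].append(abs(ys[i] - (a + b * xs[i])))

# samples with a large mean prediction error are flagged as outliers
mean_err = {i: sum(e) / len(e) for i, e in errors.items() if e}
worst = max(mean_err, key=mean_err.get)
```

The enhancement the paper describes goes further by fitting the models only on "determinate normal" samples and examining the error distribution of each dubious sample individually; the loop above shows only the shared Monte Carlo resampling core.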
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey of X-6 and MCNP and serves as an introduction to the other three papers; it can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques; it should be required reading for the fledgling Monte Carlo practitioner.
Quantum Monte Carlo applied to solids
Shulenburger, Luke; Mattsson, Thomas R.
2013-12-01
We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.
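A toy illustration of how the benchmarked bulk structural properties are extracted from computed energy-volume points, here with synthetic harmonic data and a finite-difference second derivative rather than a proper equation-of-state fit:

```python
# Extracting the equilibrium volume V0 and bulk modulus B = V * d2E/dV2 |_V0
# from energy-volume points, as done when benchmarking QMC or DFT equations
# of state. The data are synthetic: E(V) = E0 + 0.5*k*(V - V0)**2 with
# made-up constants, so the recovered values can be checked exactly.
E0, V0_TRUE, K = -10.0, 20.0, 0.05     # eV, A^3, eV/A^6 (illustrative)
volumes = [18.0, 19.0, 20.0, 21.0, 22.0]
energies = [E0 + 0.5 * K * (v - V0_TRUE) ** 2 for v in volumes]

# locate the minimum and use a central difference around it
i = min(range(len(energies)), key=energies.__getitem__)
dv = volumes[1] - volumes[0]
d2E = (energies[i - 1] - 2 * energies[i] + energies[i + 1]) / dv ** 2
V0 = volumes[i]
bulk_modulus = V0 * d2E                # eV/A^3; multiply by ~160.2 for GPa
```

Production benchmarks fit a Birch-Murnaghan or Vinet form to noisy E(V) data instead of differencing, but the extracted quantities are the same.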
A practical Monte Carlo MU verification tool for IMRT quality assurance.
Fan, J; Li, J; Chen, L; Stathakis, S; Luo, W; Du Plessis, F; Xiong, W; Yang, J; Ma, C-M
2006-05-21
Quality assurance (QA) for intensity-modulated radiation therapy (IMRT) treatment planning and beam delivery, using ionization chamber measurements and film dosimetry in a phantom, is time consuming. The Monte Carlo method is the most accurate method for radiotherapy dose calculation. However, a major drawback of Monte Carlo dose calculation as currently implemented is its slow speed. The goal of this work is to bring the efficiency of Monte Carlo into a practical range by developing a fast Monte Carlo monitor unit (MU) verification tool for IMRT. A special estimator for dose at a point called the point detector has been used in this research. The point detector uses the next event estimation (NEE) method to calculate the photon energy fluence at a point of interest and then converts it to collision kerma by the mass energy absorption coefficient assuming the presence of transient charged particle equilibrium. The MU verification tool has been validated by comparing the calculation results with measurements. It can be used for both patient dose verification and phantom QA calculation. The dynamic leaf-sequence log file is used to rebuild the actual MLC leaf sequence in order to predict the dose actually received by the patient. Dose calculations for 20 patient plans have been performed using the point detector method. Results were compared with direct Monte Carlo simulations using EGS4/MCSIM, which is a well-benchmarked Monte Carlo code. The results between the point detector and MCSIM agreed to within 2%. A factor of 20 speedup can be achieved with the point detector method compared with direct Monte Carlo simulations.
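The point-detector conversion from energy fluence to collision kerma can be sketched as follows. The attenuation coefficient, mass energy absorption coefficient, and interaction sites are illustrative placeholders, and only the uncollided (next-event) term is scored:

```python
import math

# Sketch of the next-event-estimation (point detector) idea: at each photon
# interaction, score the probability of reaching the detector unscattered,
# then convert energy fluence to collision kerma via a mass energy
# absorption coefficient, assuming charged particle equilibrium (CPE).
MU = 0.05            # attenuation coefficient of the medium, 1/cm (illustrative)
MU_EN_RHO = 0.03     # mass energy absorption coefficient, cm^2/g (illustrative)

def nee_energy_fluence(event_pos, energy_mev, detector_pos):
    """Uncollided energy-fluence contribution (MeV/cm^2), assuming an
    isotropic emission at the interaction site."""
    r = math.dist(event_pos, detector_pos)
    return energy_mev * math.exp(-MU * r) / (4.0 * math.pi * r * r)

# accumulate contributions from a few hypothetical interaction sites
detector = (0.0, 0.0, 0.0)
events = [((10.0, 0.0, 0.0), 1.25), ((0.0, 15.0, 0.0), 0.8)]
psi = sum(nee_energy_fluence(pos, e, detector) for pos, e in events)
collision_kerma = psi * MU_EN_RHO      # MeV/g, under the CPE assumption
```

In a full implementation the exponential uses the optical depth integrated along the ray through heterogeneous media, and the angular emission probability comes from the sampled scattering distribution rather than the isotropic factor used here.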
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white, with shades of blue, red, gray, and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution affect the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
Recovering intrinsic fluorescence by Monte Carlo modeling.
Müller, Manfred; Hendriks, Benno H W
2013-02-01
We present a novel way to recover intrinsic fluorescence in turbid media based on Monte Carlo generated look-up tables and making use of a diffuse reflectance measurement taken at the same location. The method has been validated on various phantoms with known intrinsic fluorescence and is benchmarked against photon-migration methods. This new method combines more flexibility in the probe design with fast reconstruction and showed similar reconstruction accuracy as found in other reconstruction methods.
Monte Carlo approach to Estrada index
NASA Astrophysics Data System (ADS)
Gutman, Ivan; Radenković, Slavko; Graovac, Ante; Plavšić, Dejan
2007-09-01
Let G be a graph on n vertices, and let λ_1, λ_2, …, λ_n be its eigenvalues. The Estrada index of G is a recently introduced molecular structure descriptor, defined as EE = ∑_{i=1}^{n} e^{λ_i}. Using a Monte Carlo approach, and treating the graph eigenvalues as random variables, we deduce approximate expressions for EE, in terms of the number of vertices and the number of edges, of very high accuracy.
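For reference, the quantity being approximated can be computed exactly for graphs with a known spectrum; a cycle C_n, for example, has adjacency eigenvalues 2cos(2πk/n):

```python
import math

# The Estrada index EE = sum_i exp(lambda_i), evaluated from the known
# adjacency spectrum of a cycle C_n (eigenvalues 2*cos(2*pi*k/n),
# k = 0..n-1). This shows the exact quantity that the Monte Carlo
# approach approximates from vertex and edge counts alone.
def estrada_index_cycle(n):
    eigenvalues = [2.0 * math.cos(2.0 * math.pi * k / n) for k in range(n)]
    return sum(math.exp(lam) for lam in eigenvalues)

ee_c4 = estrada_index_cycle(4)          # spectrum of C_4: 2, 0, -2, 0
exact = math.exp(2) + 2.0 + math.exp(-2)
```

For a general graph one would diagonalize the adjacency matrix instead; the closed-form spectrum is used here only to keep the example self-checking.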
Accelerated Monte Carlo by Embedded Cluster Dynamics
NASA Astrophysics Data System (ADS)
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model, and numerical simulations are presented for d=2, d=3, and mean-field-theory lattices.
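A minimal Wolff single-cluster update for the plain 2D Ising model, the simplest instance of the cluster moves surveyed here; the lattice size and temperature are illustrative:

```python
import math
import random

random.seed(4)

# Wolff single-cluster update for the 2D Ising model with periodic
# boundaries. Bonds between aligned neighbors are activated with
# probability 1 - exp(-2*beta), and the whole cluster is flipped.
L = 8
BETA = 0.5                                # > beta_c ~ 0.4407 (ordered phase)
P_ADD = 1.0 - math.exp(-2.0 * BETA)       # bond-activation probability

spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def wolff_step():
    """Grow one cluster from a random seed site and flip it."""
    i, j = random.randrange(L), random.randrange(L)
    seed = spins[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            if (nx, ny) not in cluster and spins[nx][ny] == seed:
                if random.random() < P_ADD:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
    for x, y in cluster:
        spins[x][y] = -seed
    return len(cluster)

sizes = [wolff_step() for _ in range(200)]
avg_cluster = sum(sizes) / len(sizes)
```

The embedding methods discussed in the overview reduce O(N) or XY spins to effective Ising variables so that exactly this kind of cluster move can be applied.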
Islam, M. Anwarul; Akramuzzaman, M. M.; Zakaria, G. A.
2012-01-01
The manufacture of miniaturized high-activity 192Ir sources has made them a market preference in modern brachytherapy. The smaller dimensions of the sources are compatible with smaller-diameter applicators and are also suitable for interstitial implants. Miniaturized 60Co HDR sources are now available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir. High dose rate brachytherapy sources with longer half-lives are a pragmatic and economical solution for developing countries. This study compares the TG-43U1 dosimetric parameters of the new BEBIG 60Co HDR and new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using an EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and new BEBIG 60Co HDR sources. The air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10-8 ± 0.55% U Bq-1 and 3.039×10-7 ± 0.41% U Bq-1 for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in a water medium are 1.116 ± 0.12% cGy h-1 U-1 and 1.097 ± 0.12% cGy h-1 U-1, respectively. The values of the radial dose function at distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the 192Ir source. The anisotropy values rise sharply toward the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than for the 192Ir source. The tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation between the two sources is noted at a 5 cm radial distance except for lung tissue. The true dose rates are calculated considering photon as well as electron transport using appropriate cut
Inclusion of coherence in Monte Carlo models for simulation of x-ray phase contrast imaging.
Cipiccia, Silvia; Vittoria, Fabio A; Weikum, Maria; Olivo, Alessandro; Jaroszynski, Dino A
2014-09-22
Interest in phase contrast imaging methods based on electromagnetic wave coherence has increased significantly in recent years, particularly at X-ray energies, giving rise to a demand for effective simulation methods. Coherent imaging approaches are usually based on wave optics, which requires significant computational resources, particularly for producing 2D images. Monte Carlo (MC) methods, used to track individual particles/photons in particle physics, are generally not considered appropriate for describing coherence effects. Previous preliminary work has evaluated the possibility of incorporating coherence into Monte Carlo codes. In this paper, we present the implementation of refraction in a model based on time-of-flight calculations and the Huygens-Fresnel principle, which allows the formation of phase contrast images to be reproduced under partially and fully coherent experimental conditions. The model is implemented in the FLUKA Monte Carlo code, and X-ray phase contrast imaging simulations are compared with experiments and wave optics calculations.
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to its flexibility and ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated by nuances of experimental conditions and uncertainty, while the latter is challenging due to typical graphical presentation and lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations in general involves a steep learning curve; it is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray-based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides an investigator the necessary information to benchmark his/her Monte Carlo simulation software against the reference cases included here.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and the inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not known a priori unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during the simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Monte Carlo applications at Hanford Engineering Development Laboratory
Carter, L.L.; Morford, R.J.; Wilcox, A.D.
1980-03-01
Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in the execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using a computer code to have an understanding of particle transport, including some intuition for the problems being solved; to understand the construction of sources for the random walk; to understand the interpretation of tallies made by the code; and to have a basic understanding of elementary biasing techniques.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree to within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical, and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
Analytical positron range modelling in heterogeneous media for PET Monte Carlo simulation.
Lehnert, Wencke; Gregoire, Marie-Claude; Reilhac, Anthonin; Meikle, Steven R
2011-06-07
Monte Carlo simulation codes that model positron interactions along their tortuous path are expected to be accurate but are usually slow. A simpler and potentially faster approach is to model positron range from analytical annihilation density distributions. The aims of this paper were to efficiently implement and validate such a method, with the addition of medium heterogeneity representing a further challenge. The analytical positron range model was evaluated by comparing annihilation density distributions with those produced by the Monte Carlo simulator GATE and by quantitatively analysing the final reconstructed images of Monte Carlo simulated data. In addition, the influence of positronium formation on positron range and hence on the performance of Monte Carlo simulation was investigated. The results demonstrate that 1D annihilation density distributions for different isotope-media combinations can be fitted with Gaussian functions and hence be described by simple look-up-tables of fitting coefficients. Together with the method developed for simulating positron range in heterogeneous media, this allows for efficient modelling of positron range in Monte Carlo simulation. The level of agreement of the analytical model with GATE depends somewhat on the simulated scanner and the particular research task, but appears to be suitable for lower energy positron emitters, such as (18)F or (11)C. No reliable conclusion about the influence of positronium formation on positron range and simulation accuracy could be drawn.
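The look-up-table approach can be sketched as follows; the Gaussian sigmas per (isotope, medium) pair below are placeholders, not fitted GATE results:

```python
import random

random.seed(5)

# Sketch of the look-up-table idea: 1D annihilation density profiles
# fitted by Gaussians, stored per (isotope, medium) pair as a sigma.
# The sigma values (mm) are invented placeholders.
SIGMA_MM = {
    ("F18", "water"): 0.25,
    ("F18", "lung"):  0.90,
    ("C11", "water"): 0.40,
}

def sample_annihilation_offset(isotope, medium):
    """Draw a 3D annihilation displacement (mm) from the fitted Gaussian."""
    s = SIGMA_MM[(isotope, medium)]
    return tuple(random.gauss(0.0, s) for _ in range(3))

# displace a batch of hypothetical decay positions in water
offsets = [sample_annihilation_offset("F18", "water") for _ in range(2000)]
mean_abs_x = sum(abs(o[0]) for o in offsets) / len(offsets)
```

Handling heterogeneous media, as the paper does, additionally requires deciding which medium's distribution (or a blend) applies when the sampled displacement crosses a tissue boundary.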
Monte Carlo simulation of a medical linear accelerator for radiotherapy use.
Serrano, B; Hachem, A; Franchisseur, E; Hérault, J; Marcié, S; Costa, A; Bensadoun, R J; Barthe, J; Gérard, J P
2006-01-01
A Monte Carlo code, MCNPX (Monte Carlo N-Particle), was used to model a 25 MV photon beam from a PRIMUS (KD2-Siemens) medical linear electron accelerator at the Centre Antoine Lacassagne in Nice. The entire geometry, including the accelerator head and the water phantom, was simulated to calculate the dose profile and the relative depth-dose distribution. The measurements were made using an ionisation chamber in water for a range of square field sizes. The first results show that the mean electron beam energy is not 19 MeV, as stated by Siemens; agreement between the Monte Carlo calculated and measured data is obtained when the mean electron beam energy is approximately 15 MeV. These encouraging results will make it possible to check the calculated data given by the treatment planning system, especially for small fields in high-gradient heterogeneous zones, typical of the intensity-modulated radiation therapy technique.
Measuring free energy in spin-lattice models using parallel tempering Monte Carlo
NASA Astrophysics Data System (ADS)
Wang, Wenlong
2015-05-01
An efficient and simple approach to measuring the absolute free energy as a function of temperature for spin-lattice models, using two-stage parallel tempering Monte Carlo and the free energy perturbation method, is discussed, and the results are compared with those of population annealing Monte Carlo using the three-dimensional Edwards-Anderson Ising spin glass model as a benchmark test. This approach requires little modification of regular parallel tempering Monte Carlo codes and incurs little overhead. Numerical results show that parallel tempering, even though it uses far fewer temperatures than population annealing, can nevertheless measure the absolute free energy equally efficiently by simulating each temperature for longer times.
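The free energy perturbation step between neighboring ladder temperatures can be illustrated on an exactly solvable two-level system, where the estimator Z(β2)/Z(β1) = ⟨exp(-(β2-β1)E)⟩_β1 can be checked against the exact partition functions:

```python
import math
import random

random.seed(6)

# Free-energy perturbation between two neighboring temperatures, as used
# to chain absolute free energies along a parallel-tempering ladder. The
# "system" is a single two-level unit (energies 0 and EPS), so the Monte
# Carlo estimate can be checked against the exact partition functions.
EPS = 1.0
beta1, beta2 = 1.0, 1.2

def sample_energy(beta, n):
    """Draw n energies from the two-level Boltzmann distribution at beta."""
    p_excited = math.exp(-beta * EPS) / (1.0 + math.exp(-beta * EPS))
    return [EPS if random.random() < p_excited else 0.0 for _ in range(n)]

samples = sample_energy(beta1, 100000)
# Z(beta2)/Z(beta1) = < exp(-(beta2 - beta1) * E) >_beta1
ratio_mc = sum(math.exp(-(beta2 - beta1) * e) for e in samples) / len(samples)
ratio_exact = (1.0 + math.exp(-beta2 * EPS)) / (1.0 + math.exp(-beta1 * EPS))

# dimensionless free-energy difference: beta2*F2 - beta1*F1 = -ln(Z2/Z1)
df_mc = -math.log(ratio_mc)
```

In the two-stage scheme of the paper, the energies at each temperature come from equilibrated parallel tempering replicas rather than direct sampling, and chaining these differences down from a high-temperature reference yields the absolute free energy.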
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
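The core of a rejection-free ("n-fold way") kinetic Monte Carlo step is: sum the rates of all possible events, advance time by an exponential deviate, and pick one event with probability proportional to its rate. A minimal sketch for a toy adsorption-desorption lattice (the rates and sizes are illustrative, not taken from SPPARKS):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                       # lattice sites
K_ADS, K_DES = 1.0, 3.0       # per-site adsorption / desorption rates
T_END = 200.0

occupied = np.zeros(M, dtype=bool)
t, cov_time = 0.0, 0.0        # cov_time accumulates coverage * elapsed time

while t < T_END:
    n = int(occupied.sum())
    rates = np.array([K_ADS * (M - n), K_DES * n])   # total rate of each event class
    total = rates.sum()
    dt = rng.exponential(1.0 / total)                # time to the next event
    cov_time += (n / M) * min(dt, T_END - t)
    t += dt
    if rng.random() < rates[0] / total:              # pick a class proportional to its rate
        occupied[rng.choice(np.flatnonzero(~occupied))] = True
    else:
        occupied[rng.choice(np.flatnonzero(occupied))] = False

coverage = cov_time / T_END
# detailed balance gives the exact coverage K_ADS / (K_ADS + K_DES) = 0.25
```

Every step executes an event, so no moves are wasted on rejections; the parallel challenge addressed by SPPARKS is coordinating such event clocks across spatial subdomains.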
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
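The tracking/tally-server split can be illustrated with threads and a message queue standing in for MPI ranks (a toy analogue, not the OpenMC implementation): trackers send (bin, score) messages and never hold the full tally array.

```python
import queue
import threading

NUM_TRACKERS, BINS, HISTORIES = 4, 16, 1000
msgs = queue.Queue()
tally = [0.0] * BINS          # lives only on the server

def tally_server():
    while True:
        m = msgs.get()
        if m is None:         # shutdown sentinel
            return
        b, score = m
        tally[b] += score

def tracker(rank):
    for h in range(HISTORIES):
        b = (rank + h) % BINS     # stand-in for the cell a particle scored in
        msgs.put((b, 0.5))        # stand-in for a track-length estimate

server = threading.Thread(target=tally_server)
server.start()
trackers = [threading.Thread(target=tracker, args=(r,)) for r in range(NUM_TRACKERS)]
for th in trackers:
    th.start()
for th in trackers:
    th.join()
msgs.put(None)
server.join()
# every history contributed 0.5, so sum(tally) == NUM_TRACKERS * HISTORIES * 0.5
```

In the real algorithm the servers are separate compute nodes and the score messages are batched; the paper's performance model quantifies when that communication cost stays negligible.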
Monte Carlo study of Siemens PRIMUS photoneutron production.
Pena, J; Franco, L; Gómez, F; Iglesias, A; Pardo, J; Pombar, M
2005-12-21
Neutron production in radiotherapy facilities has been studied from the early days of modern linacs. Detailed studies are now possible using photoneutron capabilities of general-purpose Monte Carlo codes at energies of interest in medical physics. The present work studies the effects of modelling different accelerator head and room geometries on the neutron fluence and spectra predicted via Monte Carlo. The results from the simulation of a 15 MV Siemens PRIMUS linac show an 80% increase in the fluence scored at the isocentre when, besides modelling the components necessary for electron/photon simulations, other massive accelerator head components are included. Neutron fluence dependence on inner treatment room volume is analysed, showing that thermal neutrons have a 'gaseous' behaviour and hence a 1/V dependence. Neutron fluence maps for three energy ranges, fast (E > 0.1 MeV), epithermal (1 eV < E < 0.1 MeV) and thermal (E < 1 eV), are also presented and the influence of the head components on them is discussed.
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is that the convergence rate to the dominant eigenfunction becomes |k_3|/k_1 instead of |k_2|/k_1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must somehow sum to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this in an effective, if ad hoc, manner.
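The linear-algebra idea can be seen in a small deterministic analogue (plain NumPy, not a Monte Carlo implementation): deflating the second mode changes the convergence rate from |k_2|/k_1 to |k_3|/k_1. The matrix and its spectrum below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# symmetric test matrix with a known spectrum: k1 = 1.0, k2 = 0.95, k3 = 0.5, ...
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
ks = np.array([1.0, 0.95, 0.5, 0.3, 0.2, 0.1])
A = Q @ np.diag(ks) @ Q.T
v1, v2 = Q[:, 0], Q[:, 1]

def power_iter(M, iters):
    x = rng.standard_normal(6)
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return x

# plain power iteration converges at rate |k2/k1| = 0.95 (slow here);
# subtracting the second eigenfunction leaves the faster rate |k3/k1| = 0.5
x_plain = power_iter(A, 30)
x_defl = power_iter(A - ks[1] * np.outer(v2, v2), 30)

err_plain = 1.0 - abs(x_plain @ v1)
err_defl = 1.0 - abs(x_defl @ v1)
```

In the Monte Carlo setting the subtraction is carried by positive- and negative-weight particles rather than an explicit rank-one update, which is where the cancellation scheme of the abstract comes in.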
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.
Multicanonical Monte Carlo for Simulation of Optical Links
NASA Astrophysics Data System (ADS)
Bononi, Alberto; Rusch, Leslie A.
Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10^-6, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^-3; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools to both understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^-6, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
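The rare-event problem that MMC and IS address is easy to demonstrate: estimating P(X > 5) for a standard normal, an event with probability about 2.9 × 10^-7, where naive error counting sees essentially no hits. A minimal importance sampling sketch (the mean-shift tilt is the textbook choice, not a construction from this chapter):

```python
import math

import numpy as np

rng = np.random.default_rng(0)
a, n = 5.0, 200_000

# naive Monte Carlo: with n = 2e5 samples the event is almost never observed
naive = np.mean(rng.standard_normal(n) > a)

# importance sampling: draw from the shifted density N(a, 1) and reweight each
# sample by the likelihood ratio phi(x) / phi(x - a) = exp(-a*x + a^2/2)
x = rng.standard_normal(n) + a
w = np.exp(-a * x + 0.5 * a * a)
est = float(np.mean(np.where(x > a, w, 0.0)))

exact = 0.5 * math.erfc(a / math.sqrt(2.0))
```

The biased density puts roughly half the samples in the rare region, and the weights undo the bias exactly. Multicanonical MC achieves a similar variance reduction without choosing a biasing density in advance: it learns the weights iteratively from histograms of the output variable.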
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or only partially illuminates a vessel, the ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; the layers below do not contribute to the signal in this case. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina noninvasively looks very promising.
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of the classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing GPU performance is proper handling of the data structures. Using multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times relative to the single-threaded CPU implementation.
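Multi-spin coding is the bit-level trick referred to in the abstract: with one bit per replica, 64 independent systems are updated by a handful of bitwise operations per site. A CPU/NumPy sketch for 1D Ising chains follows (the GPU version applies the same idea per thread; the chain length, temperature, and sweep count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                          # sites per chain (periodic, J = 1, N even)
BETA_J = 0.5
P_UP = np.exp(-4.0 * BETA_J)     # acceptance of the only uphill move (dE = +4J)

# bit k of spins[i] is the spin at site i of replica k (64 replicas in parallel)
spins = rng.integers(0, np.iinfo(np.uint64).max, N, dtype=np.uint64, endpoint=True)

def rand_mask(n, p):
    # one uint64 per site with each bit set independently with probability p
    bits = (rng.random((n, 64)) < p).astype(np.uint8)
    return np.packbits(bits, axis=1).copy().view(np.uint64).ravel()

def half_sweep(idx):
    left, right = spins[(idx - 1) % N], spins[(idx + 1) % N]
    a = spins[idx] ^ left        # bit set -> bond to the left is antiparallel
    b = spins[idx] ^ right
    # dE = +4J only when both bonds are satisfied (a = b = 0); otherwise dE <= 0,
    # so the Metropolis rule reduces to: accept if a | b, else accept with prob P_UP
    accept = a | b | rand_mask(len(idx), P_UP)
    spins[idx] ^= accept         # flip the accepted spins in all 64 replicas at once

even, odd = np.arange(0, N, 2), np.arange(1, N, 2)
for _ in range(300):             # checkerboard sweeps keep neighbour updates valid
    half_sweep(even)
    half_sweep(odd)

# fraction of antiparallel bonds; the exact value is (1 - tanh(BETA_J)) / 2
anti = spins ^ np.roll(spins, -1)
frac = np.unpackbits(anti.view(np.uint8)).mean()
```

One XOR, one OR, and one XOR-assign update 64 replicas per site, which is the word-level parallelism that, combined with thousands of GPU threads, produces the large speed-ups reported.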
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulated detection of gamma radiation streaming from a radioactive material shipping cask has been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths, and on techniques used to reduce variance in the Monte Carlo calculations.
Monte Carlo simulation and dosimetric verification of radiotherapy beam modifiers
NASA Astrophysics Data System (ADS)
Spezi, E.; Lewis, D. G.; Smith, C. W.
2001-11-01
Monte Carlo simulation of beam modifiers such as physical wedges and compensating filters has been performed with a rectilinear voxel geometry module. A modified version of the EGS4/DOSXYZ code has been developed for this purpose. The new implementations have been validated against the BEAM Monte Carlo code using its standard component modules (CMs) in several geometrical conditions. No significant disagreements were found within the statistical errors of 0.5% for photons and 2% for electrons. The clinical applicability and flexibility of the new version of the code has been assessed through an extensive verification versus dosimetric data. Both Varian multi-leaf collimator (MLC) wedges and standard wedges have been simulated and compared against experiments for 6 MV photon beams and different field sizes. Good agreement was found between calculated and measured depth doses and lateral dose profiles along both wedged and unwedged directions for different depths and focus-to-surface distances. Furthermore, Monte Carlo-generated output factors for both open and wedged fields agreed with linac commissioning beam data within statistical uncertainties of the calculations (<3% at largest depths). Compensating filters of both low-density and high-density materials have also been successfully simulated. As a demonstration, a wax compensating filter with a complex three-dimensional concave and convex geometry has been modelled through a CT scan import. Calculated depth doses and lateral dose profiles for different field sizes agreed well with experiments. The code was used to investigate the performance of a commercial treatment planning system in designing compensators. Dose distributions in a heterogeneous water phantom emulating the head and neck region were calculated with the convolution-superposition method (pencil beam and collapsed cone implementations) and compared against those from the MC code developed herein. The new technique presented in this work is
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Monte Carlo ICRH simulations in fully shaped anisotropic plasmas
Jucker, M.; Graves, J. P.; Cooper, W. A.; Mellet, N.; Brunner, S.
2008-11-01
In order to numerically study the effects of Ion Cyclotron Resonant Heating (ICRH) on the fast particle distribution function in general plasma geometries, three codes have been coupled: VMEC generates a general (2D or 3D) MHD equilibrium including full shaping and pressure anisotropy. This equilibrium is then mapped into Boozer coordinates. The full-wave code LEMan then calculates the power deposition and electromagnetic field strength of a wave field generated by a chosen antenna using a warm model. Finally, the single particle Hamiltonian code VENUS combines the outputs of the two previous codes in order to calculate the evolution of the distribution function. Within VENUS, Monte Carlo operators for Coulomb collisions of the fast particles with the background plasma have been implemented, accounting for pitch angle and energy scattering. ICRH is also simulated using Monte Carlo operators on the Doppler-shifted resonant layer. The latter operators act in velocity space and induce a change of perpendicular and parallel velocity depending on the electric field strength and the corresponding wave vector. The change in the distribution function is then fed back into VMEC to generate a new equilibrium, so that a self-consistent solution can be found. This model is an enhancement of previous studies in that it is able to include full 3D effects such as magnetic ripple, treat the effects of non-zero orbit width consistently and include the generation and effects of pressure anisotropy. Here, first results of coupling the three codes are shown in 2D tokamak geometries.
Monte Carlo simulation of intercalated carbon nanotubes.
Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter
2007-01-01
Monte Carlo simulations of single- and double-walled carbon nanotubes (CNTs) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has shown that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. In the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight; hence a CNT of this size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.
Quantum Monte Carlo for vibrating molecules
Brown, W.R.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schrödinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis-function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
A Monte Carlo approach to water management
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2012-04-01
Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the results obtained may be irrelevant to the real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such a representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
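The two nested Monte Carlo loops described above (stochastic simulation to evaluate a policy in probabilistic terms, and stochastic optimization over a parsimonious set of control variables) can be sketched for a single-reservoir toy system; all numbers and the hedging rule are invented for illustration, and the outer loop is reduced to a brute-force scan.

```python
import numpy as np

rng = np.random.default_rng(0)
CAP, DEMAND, MONTHS, RUNS = 100.0, 8.0, 240, 200

def reliability(target):
    """Monte Carlo estimate of P(monthly demand met) under one hedging rule."""
    met = 0
    for _ in range(RUNS):
        storage = CAP / 2.0
        for q in rng.lognormal(mean=2.0, sigma=0.6, size=MONTHS):  # synthetic inflows
            storage = min(CAP, storage + q)          # inflow, with spill at capacity
            # hedging: release less than demand whenever storage is below the target
            release = DEMAND if storage >= target else DEMAND * storage / target
            release = min(release, storage)
            storage -= release
            met += release >= DEMAND - 1e-9
    return met / (RUNS * MONTHS)

# outer Monte Carlo "optimization" loop over the single control variable;
# a real study would use a stochastic optimization algorithm instead
targets = np.linspace(10.0, 90.0, 9)
rels = [reliability(t) for t in targets]
best_target = float(targets[int(np.argmax(rels))])
```

Because the policy is evaluated purely by simulation, any nonlinearity in the dynamics or objectives is handled for free; the price is that each candidate control value costs a full Monte Carlo run.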
Status of Monte-Carlo Event Generators
Hoeche, Stefan (SLAC)
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
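The decorator idea is straightforward to sketch: each component exposes a random-position sampler, and decorators wrap a component to shift, combine, or otherwise alter it without touching its code. The Python below illustrates the design pattern only; it is not SKIRT's actual C++ API, and all class names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

class Plummer:
    """Basic building block: a Plummer sphere with exact inverse-transform sampling."""
    def __init__(self, a=1.0):
        self.a = a
    def random_position(self):
        u = rng.random()                       # enclosed-mass fraction
        r = self.a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        cos_t = 2.0 * rng.random() - 1.0       # isotropic direction
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        return r * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

class Shifted:
    """Decorator: translate any component, reusing its sampler unchanged."""
    def __init__(self, base, offset):
        self.base, self.offset = base, np.asarray(offset, dtype=float)
    def random_position(self):
        return self.base.random_position() + self.offset

class Sum:
    """Decorator: combine two components with given mass weights."""
    def __init__(self, c1, c2, w1):
        self.c1, self.c2, self.w1 = c1, c2, w1
    def random_position(self):
        c = self.c1 if rng.random() < self.w1 else self.c2
        return c.random_position()

# chain decorators: a main component plus a shifted satellite
model = Sum(Plummer(1.0), Shifted(Plummer(0.5), [4.0, 0.0, 0.0]), w1=0.7)
pts = np.array([model.random_position() for _ in range(5000)])
```

Because every wrapper exposes the same `random_position` interface, arbitrarily deep chains of decorators remain valid components, which is the maintainability argument made in the abstract.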
Monte Carlo algorithm for free energy calculation.
Bi, Sheng; Tong, Ning-Hua
2015-07-01
We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
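One standard route to F(T) by a temperature scan (in the spirit of, though not identical to, the algorithm of the abstract) uses the perturbation identity β₂F₂ = β₁F₁ − ln⟨exp(−(β₂−β₁)E)⟩_β₁, anchored at β = 0 where βF = −ln Ω is known exactly. The sketch below applies it to a 1D Ising chain, where the exact answer is available for comparison; the model choice is ours, whereas the paper benchmarks the square and triangular lattices.

```python
import math

import numpy as np

rng = np.random.default_rng(0)
N = 10                          # 1D Ising chain, free boundaries, J = 1

def sweep(s, beta, nsweeps=1):
    for _ in range(nsweeps):
        for i in rng.integers(0, N, N):
            left = s[i - 1] if i > 0 else 0
            right = s[i + 1] if i < N - 1 else 0
            dE = 2 * s[i] * (left + right)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]

def energy(s):
    return -int(np.sum(s[:-1] * s[1:]))

# downward temperature scan with the perturbation identity
# beta2*F2 = beta1*F1 - ln < exp(-(beta2 - beta1) * E) >_beta1,
# anchored at beta = 0 where beta*F = -ln(2^N) exactly
betas = np.linspace(0.0, 1.0, 21)
betaF = -N * math.log(2.0)
s = rng.choice([-1, 1], N)
for b1, b2 in zip(betas[:-1], betas[1:]):
    sweep(s, b1, 50)                         # equilibrate at beta1
    est = []
    for _ in range(2000):
        sweep(s, b1)
        est.append(math.exp(-(b2 - b1) * energy(s)))
    betaF -= math.log(float(np.mean(est)))

# exact free-boundary chain: Z = 2 * (2*cosh(beta))**(N-1), so beta*F = -ln(Z)
exact = -(math.log(2.0) + (N - 1) * math.log(2.0 * math.cosh(1.0)))
```

Each stage needs only canonical sampling at the current temperature, which is why such scans slot naturally into existing Metropolis or parallel tempering codes.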
MBR Monte Carlo Simulation in PYTHIA8
NASA Astrophysics Data System (ADS)
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Monte Carlo procedure for protein design
NASA Astrophysics Data System (ADS)
Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik
1998-11-01
A method for sequence optimization in protein models is presented. The approach, which inherits from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] the basic philosophy of maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N = 16, 18, and 32.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
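The likelihood-free idea can be sketched in a few lines: propose a new parameter, simulate a data set under it, and accept only when a summary statistic of the simulated data falls close to the observed one. This is a minimal illustration, assuming a toy N(theta, 1) model with a flat prior and a tolerance eps, not the paper's population-genetics application.

```python
import random
import statistics

def abc_mcmc(data, n_steps=20000, eps=0.5, step=0.5, seed=1):
    """Likelihood-free MCMC sketch (Marjoram et al. style) for the mean of a
    N(theta, 1) model with a flat prior: at each step, propose theta',
    simulate a data set of the same size, and accept only if the simulated
    summary statistic lands within eps of the observed one."""
    rng = random.Random(seed)
    obs_mean = statistics.fmean(data)
    n = len(data)
    theta = obs_mean  # start near the data
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        sim = [rng.gauss(prop, 1.0) for _ in range(n)]
        # flat prior and symmetric proposal make the Metropolis-Hastings
        # ratio 1, so acceptance reduces to the closeness test alone
        if abs(statistics.fmean(sim) - obs_mean) < eps:
            theta = prop
        chain.append(theta)
    return chain
```

The retained chain approximates draws from the posterior conditioned on the summary being within eps of the data, without ever evaluating a likelihood.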
Discovering correlated fermions using quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Wagner, Lucas K.; Ceperley, David M.
2016-09-01
It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.
Quantum Monte Carlo calculations for light nuclei
Wiringa, R.B.
1998-08-01
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (Jπ, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Introduction to Cluster Monte Carlo Algorithms
NASA Astrophysics Data System (ADS)
Luijten, E.
This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
Cluster hybrid Monte Carlo simulation algorithms.
Plascak, J A; Ferrenberg, Alan M; Landau, D P
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
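The hybrid update described above, one Metropolis single-spin-flip sweep followed by one Wolff cluster flip, can be sketched for the 2-D spin-1/2 Ising model. The lattice size, coupling, and sweep count below are illustrative choices, not the parameters of the paper's study.

```python
import math
import random

def hybrid_ising(L=16, beta=0.5, sweeps=200, seed=42):
    """Cluster-hybrid Monte Carlo sketch for the 2-D spin-1/2 Ising model:
    each step performs one Metropolis lattice sweep and one Wolff cluster
    flip.  Returns the final spin configuration and |magnetization| per site."""
    rng = random.Random(seed)
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def neighbors(i, j):
        return ((i, (j + 1) % L), (i, (j - 1) % L),
                ((i + 1) % L, j), ((i - 1) % L, j))

    p_add = 1.0 - math.exp(-2.0 * beta)  # Wolff bond-activation probability

    for _ in range(sweeps):
        # Metropolis single-spin-flip sweep
        for i in range(L):
            for j in range(L):
                dE = 2.0 * spin[i][j] * sum(spin[a][b] for a, b in neighbors(i, j))
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spin[i][j] = -spin[i][j]
        # one Wolff cluster flip, grown from a random seed site
        si, sj = rng.randrange(L), rng.randrange(L)
        s0 = spin[si][sj]
        cluster = {(si, sj)}
        stack = [(si, sj)]
        while stack:
            i, j = stack.pop()
            for a, b in neighbors(i, j):
                if (a, b) not in cluster and spin[a][b] == s0 and rng.random() < p_add:
                    cluster.add((a, b))
                    stack.append((a, b))
        for i, j in cluster:
            spin[i][j] = -spin[i][j]

    m = abs(sum(sum(row) for row in spin)) / (L * L)
    return spin, m
```

The single-spin sweep decorrelates short-wavelength modes while the cluster flip attacks critical slowing down, which is the mechanism behind the performance gain reported above.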
Monte Carlo simulation for the transport beamline
NASA Astrophysics Data System (ADS)
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
NASA Astrophysics Data System (ADS)
Bauge, E.
2015-01-01
The "Full model" evaluation process, which is used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extensive use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariances associated with this evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised so that it mirrors the "Full model" evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that produces evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
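The core idea, sampling model parameters and letting agreement with experiment weight the resulting parameter distribution and covariance, can be illustrated with a toy model. The linear model y = a·x + b and the exp(-chi²/2) weight below are illustrative assumptions for the sketch, not the exact BFMC weighting prescription or a TALYS calculation.

```python
import math
import random

def weighted_param_covariance(xs, ys, sigma, n_samples=20000, seed=3):
    """Toy Monte Carlo evaluation sketch: sample model parameters (a, b) of
    the assumed model y = a*x + b from a broad uniform prior, weight each
    sample by its agreement with the 'experimental' data via exp(-chi2/2),
    and form the weighted mean and covariance of the parameters."""
    rng = random.Random(seed)
    samples, weights = [], []
    for _ in range(n_samples):
        a, b = rng.uniform(-5, 5), rng.uniform(-5, 5)
        chi2 = sum((y - (a * x + b)) ** 2 / sigma ** 2 for x, y in zip(xs, ys))
        samples.append((a, b))
        weights.append(math.exp(-0.5 * chi2))
    wsum = sum(weights)
    mean = [sum(w * s[k] for w, s in zip(weights, samples)) / wsum
            for k in range(2)]
    cov = [[sum(w * (s[i] - mean[i]) * (s[j] - mean[j])
                for w, s in zip(weights, samples)) / wsum
            for j in range(2)] for i in range(2)]
    return mean, cov
```

The weighted covariance of the surviving parameter samples is the Monte Carlo analogue of the evaluated covariance matrix discussed above.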
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
Monte Carlo simulation of a new gamma ray telescope
NASA Technical Reports Server (NTRS)
Simone, J.; Oneill, T.; Tumer, O. T.; Zych, A. D.
1985-01-01
A new Monte Carlo code has been written to simulate the response of the new University of California double scatter gamma ray telescope. This package of modular software routines, written in VAX FORTRAN 77, simulates the detection of 0.1 to 35 MeV gamma rays. The new telescope is flown from high altitude balloons to measure medium energy gamma radiation from astronomical sources. This paper presents (1) the basic physics methods in the code, and (2) the predicted response functions of the telescope. Gamma ray processes include Compton scattering, pair production and photoelectric absorption in plastic scintillator, NaI(Tl) and aluminum. Electron transport processes include ionization energy loss, multiple scattering, production of bremsstrahlung photons and positron annihilation.
Monte Carlo simulations for optimization of neutron shielding concrete
NASA Astrophysics Data System (ADS)
Piotrowski, Tomasz; Tefelski, Dariusz B.; Polański, Aleksander; Skubalski, Janusz
2012-06-01
Concrete is one of the main materials used for gamma and neutron shielding. While in the case of gamma rays an increase in density is usually sufficient, protection against neutrons is more complex. The aim of this paper is to show the possibility of using Monte Carlo codes for the evaluation and optimization of a concrete mix to achieve better neutron shielding. Two codes (MCNPX and SPOT, the latter written by the authors) were used to simulate neutron transport through a wall made of different concretes. It is shown that concrete of higher compressive strength attenuates neutrons more effectively. The advantage of heavyweight concrete (with barite aggregate), usually used for gamma shielding, over ordinary concrete was not so clear. Neutron shielding depends on many factors, e.g. neutron energy, barrier thickness and atomic composition. All this makes proper design of the concrete mix a very important issue for nuclear power plant safety assurance.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of a graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) written in the MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS uses the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: Phase-space data for a 6-MV photon beam from a Varian Clinac unit were generated and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display radiotherapy treatment plans created by the MC method and by various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed using dose profiles and dose-volume histograms and compared on the same platform. With the cluster system, calculation time improved in line with the increase in the number of central processing units (CPUs), at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Goluoglu, Sedat; Bekar, Kursat B; Wiarda, Dorothea
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Scaling GDL for Multi-cores to Process Planck HFI Beams Monte Carlo on HPC
NASA Astrophysics Data System (ADS)
Coulais, A.; Schellens, M.; Duvert, G.; Park, J.; Arabas, S.; Erard, S.; Roudier, G.; Hivon, E.; Mottet, S.; Laurent, B.; Pinter, M.; Kasradze, N.; Ayad, M.
2014-05-01
After reviewing the major progress made in GDL (now at version 0.9.4) on performance and plotting capabilities since the ADASS XXI paper (Coulais et al. 2012), we detail how a large code for the Planck HFI beams Monte Carlo was successfully transposed from IDL to GDL on HPC systems.
Teacher's Corner: Using SAS for Monte Carlo Simulation Research in SEM
ERIC Educational Resources Information Center
Fan, Xitao; Fan, Xiaotao
2005-01-01
This article illustrates the use of the SAS system for Monte Carlo simulation work in structural equation modeling (SEM). Data generation procedures for both multivariate normal and nonnormal conditions are discussed, and relevant SAS codes for implementing these procedures are presented. A hypothetical example is presented in which Monte Carlo…
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Monte Carlo simulation of ICRF discharge initiation in ITER
NASA Astrophysics Data System (ADS)
Tripský, M.; Wauters, T.; Lyssoivan, A.; Křivská, A.; Louche, F.; Van Schoor, M.; Noterdaeme, J.-M.
2015-12-01
Discharges produced and sustained by ion cyclotron range of frequency (ICRF) waves in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC). The simulations presented here aim at ensuring that the ITER ICRH&CD system can be safely employed for ICWC and at finding optimal parameters to initiate the plasma. The 1D Monte Carlo code RFdinity1D3V was developed to simulate ICRF discharge initiation. The code traces the electron motion along one toroidal magnetic field line, accelerated by the RF field in front of the ICRF antenna. Electron collisions are handled by a Monte Carlo procedure taking into account the electron energies and the corresponding cross sections for collisions with H2, H2+ and H+. The code also includes Coulomb collisions between electrons and ions (e - e, e - H2+, e - H+). We study the electron multiplication rate as a function of the RF discharge parameters: (i) antenna input power (0.1-5 MW) and (ii) neutral (H2) pressure, for two antenna phasings (monopole [0000]-phasing and small dipole [0π0π]-phasing). Furthermore, we investigate the dependence of the electron multiplication rate on the distance from the antenna straps. This radial dependence results from the decreasing electric field amplitude and field smoothing with increasing distance from the antenna straps. The numerical plasma breakdown definition used in the code corresponds to the moment when a critical electron density nec for the lower hybrid resonance (ω = ωLHR) is reached. This numerical definition was previously found to be in qualitative agreement with experimental breakdown times obtained from the literature and from experiments on ASDEX Upgrade and TEXTOR.
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D.; Thompson, Kelly G.; Urbatsch, Todd J.
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
BOOK REVIEW: Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine
NASA Astrophysics Data System (ADS)
Coulot, J.
2003-08-01
H Zaidi and G Sgouros (eds) Bristol: Institute of Physics Publishing (2002) £70.00, ISBN: 0750308168 Monte Carlo techniques are involved in many applications in medical physics, and the field of nuclear medicine has seen a great development in the past ten years due to their wider use. Thus, it is of great interest to look at the state of the art in this domain, at a time when improving computer performance allows one to obtain improved results in a dramatically reduced time. The goal of this book is to make, in 15 chapters, an exhaustive review of the use of Monte Carlo techniques in nuclear medicine, also giving key features which are not necessarily directly related to the Monte Carlo method, but mandatory for its practical application. As the book deals with `therapeutic' nuclear medicine, it focuses on internal dosimetry. After a general introduction on Monte Carlo techniques and their applications in nuclear medicine (dosimetry, imaging and radiation protection), the authors give an overview of internal dosimetry methods (formalism, mathematical phantoms, quantities of interest). Then, some of the more widely used Monte Carlo codes are described, as well as some treatment planning software packages. Some original techniques are also mentioned, such as dosimetry for boron neutron capture synovectomy. The book is generally well written, clearly presented, and very well documented. Each chapter gives an overview of its subject, and it is up to the reader to investigate it further using the extensive bibliography provided. Each topic is discussed from a practical point of view, which is of great help for non-experienced readers. For instance, the chapter about mathematical aspects of Monte Carlo particle transport is very clear and helps one to apprehend the philosophy of the method, which is often a difficulty with a more theoretical approach. Each chapter is put in the general (clinical) context, and this allows the reader to keep in mind the intrinsic limitation of each technique
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
Monte Carlo modeling of spatial coherence: free-space diffraction
Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.
2008-01-01
We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
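The copula step can be illustrated in its simplest form: draw correlated standard normals and push them through the normal CDF to obtain uniform marginals with a prescribed dependence. This is a minimal two-variable sketch under that assumption; the paper's method builds a full spatial coherence function on a source grid.

```python
import math
import random

def gaussian_copula_pairs(rho, n, seed=7):
    """Minimal Gaussian-copula sampling sketch: generate pairs of correlated
    standard normals via a 2-D Cholesky factor, then map each through the
    standard normal CDF so the marginals are uniform on [0, 1] while the
    dependence is controlled by rho."""
    rng = random.Random(seed)

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    c = math.sqrt(1.0 - rho * rho)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + c * rng.gauss(0.0, 1.0)  # corr(z1, z2) = rho
        pairs.append((phi(z1), phi(z2)))
    return pairs
```

In the full method, the same construction with a spatially sampled covariance matrix yields a random source realization whose second-order statistics match the target coherence function.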
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
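The probing-depth question can be phrased as a one-line Monte Carlo: sample burial depths from an assumed distribution and read off the depth that reaches a chosen fraction of victims. The truncated-normal distribution and its parameters below are illustrative assumptions for the sketch, not field statistics from the study.

```python
import random

def probing_depth_quantile(q=0.95, n=100000, mean_depth=1.0, sd=0.5, seed=11):
    """Toy Monte Carlo in the spirit of the rescue-parameter study: sample
    hypothetical burial depths (in meters) from a normal distribution
    truncated at zero, and return the probing depth that would reach a
    fraction q of buried victims."""
    rng = random.Random(seed)
    depths = []
    while len(depths) < n:
        d = rng.gauss(mean_depth, sd)
        if d > 0.0:  # reject non-physical negative depths
            depths.append(d)
    depths.sort()
    return depths[int(q * n)]
```

The same pattern, sampling the unknown "random" parameters and reading quantiles off the resulting distribution, applies to the resuscitation-time question as well.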
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia2 where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations2 or purely mathematical.3 It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
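The rice-sprinkling activity has a direct computational counterpart: scatter random points in the unit square and count the fraction that land inside the quarter circle, which approaches π/4.

```python
import random

def estimate_pi(n=1_000_000, seed=2):
    """Estimate pi by sprinkling n random points in the unit square and
    counting the fraction that fall inside the quarter circle of radius 1,
    mirroring the classroom activity described above."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

The statistical error shrinks as 1/sqrt(n), so a million points typically give pi to two or three decimal places.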
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
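The multilevel idea, combining many cheap coarse-timestep paths with a few expensive fine-timestep corrections coupled through shared Brownian increments, is easiest to see on a standard textbook test case. The sketch below uses Euler–Maruyama for geometric Brownian motion with an illustrative fixed sample schedule, not the paper's Coulomb-collision setting or its optimal sample allocation.

```python
import math
import random

def mlmc_gbm_mean(mu=0.05, sigma=0.2, s0=1.0, T=1.0, L=4, n0=20000, seed=5):
    """Multilevel Monte Carlo sketch for E[S_T] of geometric Brownian motion
    dS = mu*S dt + sigma*S dW with Euler-Maruyama.  Level l uses 2**l
    timesteps; adjacent levels are coupled by sharing Brownian increments,
    so the correction terms E[P_l - P_{l-1}] have small variance."""
    rng = random.Random(seed)

    def coupled(level, n):
        """Mean of (fine - coarse) payoff over n coupled paths at this level."""
        steps = 2 ** level
        h = T / steps
        acc = 0.0
        for _ in range(n):
            sf = sc = s0
            dw_pair = 0.0
            for k in range(steps):
                dw = rng.gauss(0.0, math.sqrt(h))
                sf += mu * sf * h + sigma * sf * dw        # fine path, step h
                dw_pair += dw
                if level > 0 and k % 2 == 1:               # coarse path, step 2h
                    sc += mu * sc * 2 * h + sigma * sc * dw_pair
                    dw_pair = 0.0
            acc += sf - (sc if level > 0 else 0.0)
        return acc / n

    # telescoping sum: E[P_0] plus corrections E[P_l - P_{l-1}] up to level L
    return sum(coupled(l, max(n0 // 4 ** l, 200)) for l in range(L + 1))
```

Because the per-level variance decays with h, far fewer samples are needed on the fine levels, which is the source of the O(ε⁻³) to roughly O(ε⁻²) cost reduction quoted above.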
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Geometrical Monte Carlo simulation of atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yuksel, Demet; Yuksel, Heba
2013-09-01
Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is tedious and difficult: interference from many elements affects the results and inflates the error variance of the experimental outcomes. Especially in the stronger turbulence regimes, the simulation and analysis of turbulence-induced beams require delicate attention. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.
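The final summation step, many rays arriving at one receiver point with random accumulated phases, reduces to a coherent phasor sum. In the sketch below the per-ray phase is drawn uniformly in [0, 2π) as a crude stand-in for propagation through the random bubbles; the actual model derives each phase from ray tracing through the sampled refractive-index field.

```python
import cmath
import math
import random

def receiver_intensity(n_rays=1000, n_trials=3000, seed=13):
    """Toy version of the geometrical idea above: each ray contributes a
    unit phasor exp(i*phase) at the receiver point, and the intensity is
    the squared magnitude of the coherent sum.  Returns the mean intensity
    over many independent atmosphere realizations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        field = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                    for _ in range(n_rays))
        total += abs(field) ** 2
    return total / n_trials
```

For fully random phases the mean intensity equals the ray count while individual realizations fluctuate strongly, which is exactly the bright/dark speckle pattern described above.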
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
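The "global particle find" step can be sketched for the special case of a regular Cartesian decomposition, where the owning processor follows directly from the particle coordinate. This is an illustrative simplification; the dissertation handles general constructive solid geometry, where the mapping is not a closed-form expression.

```python
def owning_rank(pos, lo, hi, dims):
    """Sketch of the 'global particle find' idea for a regular Cartesian
    domain decomposition: given a particle position in the box [lo, hi]
    split into dims[k] domains per axis, return the rank that owns the
    particle, so a mislocated particle can be forwarded directly to its
    owner instead of searched for."""
    idx = []
    for k in range(3):
        frac = (pos[k] - lo[k]) / (hi[k] - lo[k])
        i = min(int(frac * dims[k]), dims[k] - 1)  # clamp the upper face
        idx.append(max(i, 0))
    # linearize (x fastest) into a single MPI-style rank id
    return idx[0] + dims[0] * (idx[1] + dims[1] * idx[2])
```

Because every processor can evaluate this mapping locally, resolving a stray particle costs one message to its owner rather than a global search.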
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
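A toy version of the simulated-annealing clustering idea on 1-D range values; the within-cluster sum-of-squares cost, the geometric cooling schedule, and all parameter values are assumptions for illustration, not the paper's formulation.

```python
import math
import random

def cluster_cost(points, labels, k):
    """Within-cluster sum of squared distances to the cluster mean."""
    cost = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            mu = sum(members) / len(members)
            cost += sum((p - mu) ** 2 for p in members)
    return cost

def anneal_clusters(points, k=2, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Relabel one point at a time, accepting uphill moves with the
    Metropolis probability exp(-increase / T) under geometric cooling."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    cost = cluster_cost(points, labels, k)
    temp = t0
    for _ in range(steps):
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)
        new_cost = cluster_cost(points, labels, k)
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / temp):
            labels[i] = old               # reject the uphill move
        else:
            cost = new_cost               # accept
        temp *= cooling
    return labels, cost

# two well-separated groups of 1-D range values
pts = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
labels, cost = anneal_clusters(pts)
print(labels, cost)
```

Setting the temperature to zero throughout recovers the basic greedy Monte Carlo method the paper compares against.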
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; ...
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
CosmoMC: Cosmological MonteCarlo
NASA Astrophysics Data System (ADS)
Lewis, Antony; Bridle, Sarah
2011-06-01
We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints and the effect of the prior, assess goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
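The core of any such parameter exploration is a random-walk Metropolis sampler. A minimal sketch on a toy two-parameter Gaussian "posterior"; the target, step size, and chain length are illustrative assumptions, not CosmoMC's actual likelihood or proposal machinery.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step
    and accept with probability min(1, P(new)/P(old))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis rule
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# toy 2-parameter Gaussian posterior centred on (1, -2)
log_post = lambda x: -0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)
chain = metropolis(log_post, [0.0, 0.0])
print(chain[1000:].mean(axis=0))   # ≈ [1, -2] after burn-in
```

Importance sampling, as described in the appendices, then amounts to reweighting these stored samples by a ratio of posteriors instead of rerunning the chain.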
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
NASA Astrophysics Data System (ADS)
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-07-01
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo for atoms and molecules
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
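A deliberately crude diffusion Monte Carlo sketch on the 1-D harmonic oscillator (exact ground-state energy 0.5 in natural units) illustrates the branching random walk underlying the method; it omits importance sampling, fixed nodes, and every refinement the abstract discusses, and all parameter values are illustrative assumptions.

```python
import numpy as np

def diffusion_mc(n_walkers=2000, n_steps=400, dt=0.01, seed=0):
    """Branching random walk for the 1-D harmonic oscillator
    (V = x**2 / 2; exact ground-state energy 0.5). Walkers diffuse,
    branch with weight exp(-dt*(V - E_T)), and the trial energy E_T is
    adjusted to hold the population steady; its average estimates the
    ground-state energy."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_walkers)
    e_t = 0.5
    history = []
    for _ in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)    # diffusion
        weight = np.exp(-dt * (0.5 * x**2 - e_t))            # branching weight
        copies = (weight + rng.uniform(size=x.size)).astype(int)
        x = np.repeat(x, copies)                             # birth / death
        e_t += 0.1 * np.log(n_walkers / max(x.size, 1))      # population control
        history.append(e_t)
    return float(np.mean(history[n_steps // 2:]))

e0 = diffusion_mc()
print(e0)   # should land near 0.5
```

The time-step and population-control parameters varied here are toy analogues of the Monte Carlo parameters whose effects the abstract reports studying.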
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{–2}) or O(ε^{–2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{–3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10^{–5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε , the computational cost of the method is O(ε^{–2}) or (ε^{–2}(lnε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{–3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε=10^{–5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
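The telescoping-sum idea behind multilevel Monte Carlo can be sketched on a scalar SDE (geometric Brownian motion under Euler–Maruyama, not the Landau–Fokker–Planck problem of the paper); the drift, volatility, level count, and sample counts are illustrative assumptions.

```python
import numpy as np

def level_samples(level, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Coupled fine/coarse Euler-Maruyama payoffs X_T for
    dX = mu*X dt + sigma*X dW: the fine path uses 2**level steps, the
    coarse path the same Brownian increments summed in pairs."""
    rng = np.random.default_rng(seed)
    nf = 2 ** level
    dt = T / nf
    dw = np.sqrt(dt) * rng.standard_normal((n_paths, nf))
    xf = np.full(n_paths, x0)
    for i in range(nf):
        xf = xf + mu * xf * dt + sigma * xf * dw[:, i]
    if level == 0:
        return xf, np.zeros(n_paths)
    dwc = dw[:, 0::2] + dw[:, 1::2]              # coarse increments
    xc = np.full(n_paths, x0)
    for i in range(nf // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dwc[:, i]
    return xf, xc

def mlmc_estimate(max_level=6, n_paths=20000):
    """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    return sum(np.mean(np.subtract(*level_samples(l, n_paths, seed=l)))
               for l in range(max_level + 1))

est = mlmc_estimate()
print(est)   # exact answer is exp(0.05) ≈ 1.0513
```

The tight coupling of the coarse and fine paths is what shrinks the variance of the correction terms and yields the quoted cost advantage over single-level sampling.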
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
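The linchpin of the IMC linearization is the Fleck factor; a sketch under its standard textbook form f = 1/(1 + α β c Δt σ_P), with β = 4aT³/(ρc_v). The numerical values below are placeholders chosen only to show the trend, not data from the review.

```python
def fleck_factor(beta, sigma_planck, c, dt, alpha=1.0):
    """Fleck factor f = 1 / (1 + alpha*beta*c*dt*sigma_P), with
    beta = 4*a*T**3 / (rho*c_v). IMC treats a fraction (1 - f) of
    absorption events as effective scattering, which is what keeps
    large-time-step calculations stable; as dt -> 0, f -> 1 and pure
    absorption is recovered."""
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_planck)

# placeholder values: the factor shrinks as the time step grows
f_small = fleck_factor(beta=1.0, sigma_planck=1.0, c=3e10, dt=1e-12)
f_large = fleck_factor(beta=1.0, sigma_planck=1.0, c=3e10, dt=1e-9)
print(f_small, f_large)
```

The small- and large-Δt limits of f are exactly the regimes whose accuracy and stability the review analyzes.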
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows:
* Preface
* Heavy Fragment Production for Hadronic Cascade Codes
* Monte Carlo Simulations of Space Radiation Environments
* Merging Parton Showers with Higher Order QCD Monte Carlos
* An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson
* GEANT Simulation of Hall C Detector at CEBAF
* Monte Carlo Simulations in Radioecology: Chernobyl Experience
* UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features
* Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions
* GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling
* Role of MPP Granularity in Optimizing Monte Carlo Programming
* Status and Future Trends of the GEANT System
* The Binary Sectioning Geometry for Monte Carlo Detector Simulation
* A Combined HETC-FLUKA Intranuclear Cascade Event Generator
* The HARP Nucleon Polarimeter
* Simulation and Data Analysis Software for CLAS
* TRAP -- An Optical Ray Tracing Program
* Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo
* FLUKA: Hadronic Benchmarks and Applications
* Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA
* Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions
* Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich
* Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers
* Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System
* Overview of Matrix Element Methods in Event Generation
* Fast Electromagnetic Showers
* GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON
* Event Display for the CLAS Detector
* Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry
* GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon
Monte Carlo calculation of the radiation field at aircraft altitudes.
Roesler, S; Heinrich, W; Schraube, H
2002-01-01
Energy spectra of secondary cosmic rays are calculated for aircraft altitudes and a discrete set of solar modulation parameters and rigidity cut-off values covering all possible conditions. The calculations are based on the Monte Carlo code FLUKA and on the most recent information on the interstellar cosmic ray flux including a detailed model of solar modulation. Results are compared to a large variety of experimental data obtained on the ground and aboard aircraft and balloons, such as neutron, proton, and muon spectra and yields of charged particles. Furthermore, particle fluence is converted into ambient dose equivalent and effective dose and the dependence of these quantities on height above sea level, solar modulation, and geographical location is studied. Finally, calculated dose equivalent is compared to results of comprehensive measurements performed aboard aircraft.
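The fluence-to-dose conversion step the abstract describes amounts to folding the particle fluence spectrum with energy-dependent conversion coefficients, H = Σᵢ φᵢ hᵢ ΔEᵢ. A sketch with made-up placeholder numbers, not ICRP coefficients or computed FLUKA spectra:

```python
import numpy as np

# Energy grid, fluence spectrum, and fluence-to-dose conversion
# coefficients; every number here is a made-up placeholder.
energy    = np.array([1.0, 10.0, 100.0, 1000.0])   # bin centres, MeV
fluence   = np.array([50.0, 20.0, 5.0, 0.5])       # particles cm^-2 MeV^-1
h_coeff   = np.array([4e-8, 2e-7, 4e-7, 5e-7])     # Sv cm^2 per unit fluence
bin_width = np.array([1.0, 9.0, 90.0, 900.0])      # MeV

# fold the spectrum with the coefficients: H = sum_i phi_i * h_i * dE_i
dose = float(np.sum(fluence * h_coeff * bin_width))  # Sv
print(dose)
```

Repeating this fold per particle species and summing gives the ambient dose equivalent whose altitude and latitude dependence the paper studies.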
Monte carlo calculations of light scattering from clouds.
Plass, G N; Kattawar, G W
1968-03-01
The scattering of visible light by clouds is calculated from an efficient Monte Carlo code which follows the multiply scattered path of the photon. The single scattering function is obtained from the Mie theory by integration over a particle size distribution appropriate for cumulus clouds at 0.7-μm wavelength. The photons are followed through a sufficient number of collisions and reflections from the lower surface (which may have any desired albedo) until they make a negligible contribution to the intensity. Various variance reduction techniques are used to improve the statistics. The cloud albedo and the mean optical path of the transmitted and reflected photons are given as a function of the solar zenith angle, optical thickness, and surface albedo. The numerous small-angle scatterings of the photon in the direction of the incident beam are followed accurately and produce a greater penetration into the cloud than is obtained with a more isotropic and less realistic phase function.
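A stripped-down version of such a photon random walk in a plane-parallel cloud; isotropic scattering replaces the Mie phase function used in the paper, the lower surface is taken as black, and the optical thickness, single-scattering albedo, and photon count are illustrative assumptions.

```python
import math
import random

def cloud_albedo(tau=8.0, omega0=0.999, n_photons=20000, seed=1):
    """Fraction of normally incident photons reflected by a plane-parallel
    cloud of optical thickness tau. Photons travel exponential optical
    paths, survive each collision with probability omega0, and scatter
    isotropically (standing in for the Mie phase function)."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n_photons):
        depth, mu = 0.0, 1.0          # optical depth reached, direction cosine
        while True:
            depth += mu * (-math.log(rng.random()))  # next collision point
            if depth <= 0.0:
                reflected += 1        # escaped through the top
                break
            if depth >= tau:
                break                 # transmitted (black surface below)
            if rng.random() > omega0:
                break                 # absorbed within the cloud
            mu = rng.uniform(-1.0, 1.0)              # isotropic re-direction
    return reflected / n_photons

albedo = cloud_albedo()
print(albedo)
```

As the paper notes, a forward-peaked Mie phase function would push photons deeper into the cloud than this isotropic stand-in does.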
Monte Carlo Neutrino Transport in Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Richers, Sherwood; Dolence, Joshua; Ott, Christian
2017-01-01
Neutrino interactions dominate the energetics of core-collapse supernovae (CCSNe) and determine the composition of the matter ejected from CCSNe and gamma-ray bursts (GRBs). Three dimensional (3D) CCSN and neutron star merger simulations are rapidly improving, but still suffer from approximate treatments of neutrino transport that cripple their reliability and realism. I use my relativistic time-independent Monte Carlo neutrino transport code SEDONU to evaluate the effectiveness of leakage, moment, and discrete ordinate schemes in the context of core-collapse supernovae. I also developed a relativistic extension to the Random Walk approximation that greatly accelerates convergence in diffusive regimes, making full-domain simulations possible. Blue Waters Graduate Fellowship.
Monte Carlo Simulations of Background Spectra in Integral Imager Detectors
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.
1998-01-01
Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (Csl) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.
Monte Carlo Modeling of High-Energy Film Radiography
Miller, A.C., Jr.; Cochran, J.L.; Lamberti, V.E.
2003-03-28
High-energy film radiography methods, adapted in the past to performing specific tasks, must now meet increasing demands to identify defects and perform critical measurements in a wide variety of manufacturing processes. Although film provides unequaled resolution for most components and assemblies, image quality must be enhanced with much more detailed information to identify problems and qualify features of interest inside manufactured items. The work described is concerned with improving current 9 MeV nondestructive practice by optimizing the important parameters involved in film radiography using computational methods. In order to follow important scattering effects produced by electrons, the Monte Carlo N-Particle (MCNP) transport code was used with advanced, highly parallel computer systems. The work has provided a more detailed understanding of latent image formation at high X-ray energies, and suggests that improvements can be made in our ability to identify defects and to obtain much more detail in images of fine features.
OBJECT KINETIC MONTE CARLO SIMULATIONS OF RADIATION DAMAGE IN TUNGSTEN
Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.
2015-04-16
We used our recently developed lattice-based object kinetic Monte Carlo code, KSOME [1], to carry out simulations of radiation damage in bulk tungsten at temperatures of 300 and 2050 K for various dose rates. Displacement cascades generated from molecular dynamics (MD) simulations for PKA energies of 60, 75 and 100 keV provided the residual point defect distributions. It was found that the number density of vacancies in the simulation box does not change with dose rate, while the number density of vacancy clusters slightly decreases with dose rate, indicating that bigger clusters are formed at larger dose rates. At 300 K, although the average vacancy cluster size increases slightly, the vast majority of vacancies exist as mono-vacancies. At 2050 K, no accumulation of defects was observed during irradiation over a wide range of dose rates for all PKA energies studied in this work.
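The event-selection core of a residence-time (BKL) kinetic Monte Carlo step, of the kind a lattice object-KMC code such as KSOME builds on; the two event types and their rates below are placeholders, not tungsten defect data.

```python
import math
import random

def kmc_run(rates, n_events=1000, seed=0):
    """Residence-time (BKL) kinetic Monte Carlo: each step selects one
    event with probability rate/R_total and advances the clock by an
    exponentially distributed waiting time with mean 1/R_total."""
    rng = random.Random(seed)
    events = list(rates)
    total = sum(rates.values())
    clock = 0.0
    counts = {e: 0 for e in events}
    for _ in range(n_events):
        u = rng.random() * total              # pick event by cumulative rate
        acc = 0.0
        for e in events:
            acc += rates[e]
            if u < acc:
                counts[e] += 1
                break
        clock += -math.log(rng.random()) / total   # stochastic time step
    return clock, counts

# placeholder rates (s^-1) for two thermally activated defect hops
rates = {"vacancy_hop": 1e3, "interstitial_hop": 9e3}
clock, counts = kmc_run(rates)
print(clock, counts)
```

Because the clock advance scales with 1/R_total, fast high-temperature events dominate the schedule, which is why dose rate and temperature interact as the abstract describes.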