Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Monte Carlo Ion Transport Analysis Code.
2009-04-15
Version 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are multilayer polyatomic materials.
Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration
NASA Technical Reports Server (NTRS)
Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.; Gadoli, E.; Gazelli, M. V.; LeBourgeois, M.; Lee, K. T.; Mayes, B.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L. S.; Rancati, T.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannocchio, D.; Smirnov, G.
2004-01-01
NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code that tracks neutrons, photons and electrons, using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Nuclear reactions in Monte Carlo codes.
Ferrari, A; Sala, P R
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically, split into the major steps, in order to stress the different approaches required for a full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description is necessarily schematic and somewhat incomplete, but hopefully it will be useful as a first introduction to this topic. Examples are shown mostly for the high-energy regime, where all the mechanisms mentioned in the paper are at work and with which most readers are perhaps less familiar. Examples for lower energies can be found in the references.
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo code, replacement pages containing corrections, Part II of the report, which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross-section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding a new subroutine, RFRE. References are updated, and errors in the original report have been corrected.
Optix: A Monte Carlo scintillation light transport code
NASA Astrophysics Data System (ADS)
Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi
2014-02-01
The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, which is an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted, based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes, to verify various aspects of the developed code. In addition, extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement.
Monte Carlo simulation of turnover processes in the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
A Monte Carlo model for the gardening of the lunar surface by meteoritic impact is described, and some representative results are given. The model accounts with reasonable success for a wide variety of properties of the regolith. The smoothness of the lunar surface on a scale of centimeters to meters, which was not reproduced in an earlier version of the model, is accounted for by the preferential downward movement of low-energy secondary particles. The time scale for filling lunar grooves and craters by this process is also derived. The experimental bombardment ages (about 4 × 10^8 yr for spallogenic rare gases, about 10^9 yr for neutron-capture Gd and Sm isotopes) are not reproduced by the model. The explanation is not obvious.
Domain decomposition methods for a parallel Monte Carlo transport code
Alme, H J; Rodrigue, G H; Zimmerman, G B
1999-01-27
Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.
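The photon side of such a laser-tissue simulation can be sketched, in grossly simplified form, as a Monte Carlo walk through a purely absorbing 1-D slab. The attenuation coefficient, thickness, and photon count below are illustrative assumptions, not values from the paper, and a real tissue code would also sample scattering and 3-D geometry.

```python
import math
import random

def transmit_fraction(mu_t, thickness, n_photons, seed=0):
    """Estimate the fraction of photons crossing a purely absorbing
    1-D slab by sampling exponential free paths."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # Distance to first interaction: -ln(xi)/mu_t with xi in (0, 1].
        path = -math.log(1.0 - rng.random()) / mu_t
        if path > thickness:
            transmitted += 1
    return transmitted / n_photons

# For pure absorption the analytic answer is exp(-mu_t * L).
est = transmit_fraction(mu_t=2.0, thickness=1.0, n_photons=100_000)
```

Because each photon history is independent, the loop above is exactly the kind of work that can be divided across processors, which is why domain decomposition and load balance, rather than the physics, dominate the parallelization effort.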
Monte Carlo Model Insights into the Lunar Sodium Exosphere
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Killen, R. M.; Sarantos, M.
2012-01-01
Sodium in the lunar exosphere is released from the lunar regolith by several mechanisms. These mechanisms include photon-stimulated desorption (PSD), impact vaporization, electron-stimulated desorption, and ion sputtering. Usually, PSD dominates; however, transient events can temporarily enhance other release mechanisms so that they are dominant. Examples of transient events include meteor showers and coronal mass ejections. The interaction between sodium and the regolith is important in determining the density and spatial distribution of sodium in the lunar exosphere. The temperature at which sodium sticks to the surface is one factor. In addition, the amount of thermal accommodation during the encounter between a sodium atom and the surface affects the exospheric distribution. Finally, the fraction of particles that stick while the surface is cold and are re-released when the surface warms also affects the exospheric density. In [1], we showed the "ambient" sodium exosphere from Monte Carlo modeling with a fixed source rate and fixed surface interaction parameters. We compared the enhancement when a CME passes the Moon to the ambient conditions. Here, we compare model results to data in order to determine the source rates and surface interaction parameters that provide the best fit of the model to the data.
ALEPH2 - A general purpose Monte Carlo depletion code
Stankovskiy, A.; Van Den Eynde, G.; Baeten, P.; Trakas, C.; Demy, P. M.; Villatte, L.
2012-07-01
The Monte Carlo burn-up code ALEPH has been under development at SCK·CEN since 2004. A previous version of the code implemented the coupling between the Monte Carlo transport (any version of MCNP or MCNPX) and the 'deterministic' depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features that make it stand out among other depletion codes. The most important feature is full data consistency between the steady-state Monte Carlo and time-dependent depletion calculations. The latest-generation general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) are fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm eliminates the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism, calculation of nuclear heating, calculation of decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program; ALEPH2 has shown better agreement with measured data than other depletion codes. (authors)
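The predictor-corrector mechanism mentioned above can be illustrated on a single nuclide with a density-dependent reaction rate. The functions below are a schematic one-nuclide sketch, not ALEPH2's actual coupling of MCNP fluxes to the depletion solver.

```python
import math

def depletion_step(n0, rate, dt):
    """Deplete one nuclide, dN/dt = -rate * N, over a step dt."""
    return n0 * math.exp(-rate * dt)

def predictor_corrector(n0, rate_of, dt):
    """Predictor-corrector burnup step: deplete with the
    beginning-of-step rate, re-evaluate the rate at the predicted
    end-of-step density, then redo the step with the averaged rate."""
    r0 = rate_of(n0)
    n_pred = depletion_step(n0, r0, dt)   # predictor step
    r1 = rate_of(n_pred)                  # rate at predicted density
    return depletion_step(n0, 0.5 * (r0 + r1), dt)
```

With a density-dependent rate (standing in for the flux changing as the material depletes), the corrected step tracks the exact solution much more closely than a single beginning-of-step evaluation would.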
Recent advances in the Mercury Monte Carlo particle transport code
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
Monte Carlo Nucleon Meson Transport Code System.
2000-11-17
Version 00 NMTC/JAERI97 is an upgraded version of the code system NMTC/JAERI, which was developed in 1982 at JAERI and is based on the CCC-161/NMTC code system. NMTC/JAERI97 simulates high energy nuclear reactions and nucleon-meson transport processes.
A semianalytic Monte Carlo code for modelling LIDAR measurements
NASA Astrophysics Data System (ADS)
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles, through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
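Two of the variance-reduction devices listed above, splitting and Russian roulette, amount to weight bookkeeping on each photon packet. A minimal sketch, with assumed weight-window thresholds (the code's actual thresholds and triggers are not given in the abstract):

```python
import random

def roulette_and_split(weight, w_low=0.1, w_high=2.0, survival=0.5, rng=None):
    """Apply weight-window style Russian roulette / splitting to one
    photon packet; returns the list of surviving packet weights."""
    rng = rng or random.Random()
    if weight < w_low:
        # Russian roulette: kill with probability 1 - survival, and
        # boost the survivor's weight so the expected weight is preserved.
        if rng.random() < survival:
            return [weight / survival]
        return []
    if weight > w_high:
        # Splitting: replace one heavy packet with n lighter copies.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]
```

Both branches conserve expected weight, which is what keeps the estimate unbiased while concentrating computational effort on packets that matter.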
Geometric Templates for Improved Tracking Performance in Monte Carlo Codes
NASA Astrophysics Data System (ADS)
Nease, Brian R.; Millman, David L.; Griesheimer, David P.; Gill, Daniel F.
2014-06-01
One of the most fundamental parts of a Monte Carlo code is its geometry kernel. This kernel not only affects particle tracking (i.e., run-time performance), but also shapes how users will input models and collect results for later analyses. A new framework based on geometric templates is proposed that optimizes performance (in terms of tracking speed and memory usage) and simplifies user input for large scale models. While some aspects of this approach currently exist in different Monte Carlo codes, the optimization aspect has not been investigated or applied. If Monte Carlo codes are to be realistically used for full core analysis and design, this type of optimization will be necessary. This paper describes the new approach and the implementation of two template types in MC21: a repeated ellipse template and a box template. Several different models are tested to highlight the performance gains that can be achieved using these templates. Though the exact gains are naturally problem dependent, results show that runtime and memory usage can be significantly reduced when using templates, even as problems reach realistic model sizes.
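The tracking benefit of a repeated-geometry template shows up in point location: instead of testing every cell in the model, a lattice template finds the containing element by integer division in O(1). The toy 2-D locator below is illustrative only, not MC21's actual template interface.

```python
def locate_in_lattice(x, y, pitch, nx, ny):
    """Locate a point in an nx-by-ny rectangular lattice of square
    elements with the given pitch; returns the (i, j) element index,
    or None if the point lies outside the lattice."""
    i = int(x // pitch)
    j = int(y // pitch)
    if 0 <= i < nx and 0 <= j < ny:
        return (i, j)
    return None
```

The same idea is why templates also save memory: one stored description of the repeated element replaces thousands of explicit cell definitions in a full-core model.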
Improved diffusion coefficients generated from Monte Carlo codes
Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.
2013-07-01
Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters, including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization: whether to weight fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices, with an L2-norm error of 3.6%. This error is reduced significantly, to 0.27%, when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
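Both homogenization choices discussed above, weighting fine-group transport cross sections or fine-group diffusion coefficients, reduce to the same flux-weighted collapse operation. The helper below sketches that collapse for a generic fine-group quantity; it is a hypothetical illustration, not MC21 code.

```python
def collapse(fine_values, flux, group_map):
    """Flux-weighted collapse of fine-group constants to few groups.
    fine_values and flux are per-fine-group lists; group_map[i] is the
    coarse group that fine group i belongs to."""
    n_coarse = max(group_map) + 1
    num = [0.0] * n_coarse  # flux-weighted sums
    den = [0.0] * n_coarse  # flux normalization per coarse group
    for v, phi, g in zip(fine_values, flux, group_map):
        num[g] += v * phi
        den[g] += phi
    return [n / d for n, d in zip(num, den)]
```

The paper's comparison amounts to passing either the fine-group transport cross sections or the fine-group diffusion coefficients as `fine_values`, with the correction to the diffusion approximation applied on top.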
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)
A Post-Monte-Carlo Sensitivity Analysis Code
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
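One common way to rank inputs by their variance contribution is the squared Pearson correlation of each input column with the output over the Monte Carlo sample. The abstract does not state SATOOL's exact measure, so the function below is a generic stand-in for the idea, not SATOOL's algorithm.

```python
import random
import statistics

def rank_sensitivities(samples, output):
    """Rank input variables (column indices) by squared correlation
    with the output, most sensitive first. samples is a list of rows,
    one value per input variable."""
    def corr2(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy * sxy / (sxx * syy)

    n_vars = len(samples[0])
    scores = [(corr2([row[i] for row in samples], output), i)
              for i in range(n_vars)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy check: the output is dominated by the first input variable.
rng = random.Random(1)
samples = [[rng.random(), rng.random()] for _ in range(200)]
output = [10.0 * a + b for a, b in samples]
ranking = rank_sensitivities(samples, output)
```

Squared correlation captures only linear variance contributions; a production tool would typically supplement it with rank-based or variance-decomposition measures for nonlinear models.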
Monte Carlo Simulation of Lunar Neutron Spectrum and Orbital Neutron Detection
NASA Astrophysics Data System (ADS)
SU, J.; Khachatryan, R.; Sadgeev, R.; Usikov, D.; Milikh, G. M.; Chin, G.
2012-12-01
The early detection of lunar neutrons produced by precipitation of galactic cosmic ray (GCR) particles in the upper layer of lunar soil goes back to the Apollo Moon landing (Apollo 17) epoch. Since then it has developed into its own type of remote sensing (Lunar Prospector, 1998-1999; LRO, 2009-present), which is especially sensitive for singling out information on the presence of hydrogen (e.g., frozen water inside permanently shadowed craters) from neutron-based cosmo-chemistry data. The final interpretation technique relies on comprehensive Monte Carlo simulation of neutron production by GCR and subsequent leakage from the Moon. Until now such extensive simulation was carried out mostly with the MCNPX code [1], [2]. Here we report on the use of the alternative MC code GEANT4, developed at CERN and offered as open-source software [3]. We believe that cross-comparison and inter-calibration of both codes will add more weight to the importance, versatility and reliability of the Monte Carlo approach for neutron-detection-based planetary remote sensing. As a first step we compare basic results for neutron leakage from lunar soil (for several modeled elemental compositions). The GEANT4 code was then used to study the modification of neutron leakage in the presence of a top layer of dry and wet regolith. These data were applied to the analysis of the physical nature of SNRs (Suppressed Neutron Regions) found by LEND in polar areas of the Moon [4]. [1] Lawrence D.J. et al. (2006) JGR, 111, doi:10.1029/2005JE002637. [2] Mitrofanov I.G. et al. (2008) Astrobiology, 8, doi:10.1089/ast.207.0158. [3] Agostinelli S. et al. (2003) Nuclear Instr. Methods in Phys. Res., 506A, 250-303. [4] Mitrofanov I.G. et al. (2010) Science, 330, doi:10.1126/science.1185696.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, the sixteen most time-consuming subroutines were examined and nine of them modified to accelerate computations with numerical output equivalent to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
Simulating oblique incident irradiation using the BEAMnrc Monte Carlo code.
Downes, P; Spezi, E
2009-04-01
A new source for the simulation of oblique incident irradiation has been developed for the BEAMnrc Monte Carlo code. In this work, we describe a method for the simulation of any component that is rotated at some angle relative to the central axis of the modelled radiation unit. The performance of the new BEAMnrc source was validated against experimental measurements. The comparison with ion chamber data showed very good agreement between experiments and calculation for a number of oblique irradiation angles ranging from 0 degrees to 30 degrees. The routine was also cross-validated, in geometrically equivalent conditions, against a different radiation source available in the DOSXYZnrc code. The test showed excellent consistency between the two routines. The new radiation source can be particularly useful for the Monte Carlo simulation of radiation units in which the radiation beam is tilted with respect to the unit's central axis. To highlight this, a modern cone-beam CT unit is modelled using this new source and validated against measurement.
Applications guide to the MORSE Monte Carlo code
Cramer, S.N.
1985-08-01
A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and on-going and future work is discussed.
Radiographic Capabilities of the MERCURY Monte Carlo Code
McKinley, M S; von Wittenau, A
2008-04-07
MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at Lawrence Livermore National Laboratory. Recently, a radiographic capability has been added. MERCURY can create a source of diagnostic, virtual particles that are aimed at pixels in an image tally. This new feature is compared to the radiography code HADES for verification and timing. Comparisons for accuracy were made using the French Test Object, and comparisons for timing were made by tracking through an unstructured mesh. In addition, self-consistency tests were run in MERCURY for the British Test Object and a scattering test problem. MERCURY and HADES were found to agree to the precision of the input data. HADES appears to run around eight times faster than MERCURY in the timing study. Profiling the MERCURY code has turned up several differences in the algorithms which account for this. These differences will be addressed in a future release of MERCURY.
Monte-Carlo Continuous Energy Burnup Code System.
2007-08-31
Version 00 MCB is a Monte Carlo Continuous Energy Burnup code for general-purpose use in calculating the time evolution of nuclide densities with burnup or decay. It includes eigenvalue calculations of critical and subcritical systems as well as neutron transport calculations in fixed-source mode or k-code mode to obtain the reaction rates and energy deposition that are necessary for burnup calculations. The MCB-1C patch file and data packages as distributed by the NEADB are very well organized and are being made available through RSICC as received. The RSICC package includes the MCB-1C patch and MCB data libraries. Installation of MCB requires the MCNP4C source code and utility programs, which are not included in this MCB distribution. They were provided with the now obsolete CCC-700/MCNP-4C package.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 machines was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
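The reproducibility trick of advancing the random-number seed to a known point for each history can be sketched by giving every history its own deterministic stream, so the result is independent of how histories are split across processors. The toy walk below uses Python's generator rather than KENO's; production codes use leapfrog or stride-based streams to avoid correlations between sequentially seeded generators.

```python
import random

def track_range(start, stop, base_seed=12345):
    """Track histories [start, stop), each with its own seeded RNG
    stream, so results do not depend on the processor assignment."""
    out = []
    for h in range(start, stop):
        rng = random.Random(base_seed + h)  # per-history seed
        # Toy 10-step random walk standing in for a transport history.
        out.append(sum(rng.random() for _ in range(10)))
    return out

# Two "processors" taking half the histories each reproduce the
# single-processor result exactly.
serial = track_range(0, 100)
parallel = track_range(0, 50) + track_range(50, 100)
```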
Monte Carlo code criticality benchmark comparisons for waste packaging
Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.
1992-07-01
COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on a HP 9000 workstation. COG has been recently ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented.
abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code
NASA Astrophysics Data System (ADS)
Akeret, Joel
2015-04-01
abcpmc is a Python Approximate Bayesian Computation (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with particle filtering techniques. It is extendable with k-nearest-neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
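The core idea ABC PMC builds on can be shown with the simplest rejection variant: keep prior draws whose simulated data fall within a tolerance of the observation (the PMC machinery then iteratively shrinks the tolerance and reweights the particle population). Everything below is an illustrative toy, not the abcpmc API.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n, seed=0):
    """Minimal ABC rejection sampler: accept a prior draw theta when
    the distance between simulated and observed data is below eps."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy problem: infer the mean of a unit-variance Gaussian from one datum.
obs = 1.0
post = abc_rejection(
    observed=obs,
    simulate=lambda th, rng: rng.gauss(th, 1.0),
    prior_sample=lambda rng: rng.uniform(-5.0, 5.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1, n=500)
```

Rejection sampling wastes most draws when eps is small, which is exactly the inefficiency the population-based PMC scheme with perturbation kernels is designed to remove.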
Monte Carlo code for high spatial resolution ocean color simulations.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C
2010-09-10
A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E(d) and E(u)), and upwelling radiance (L(u)) in a two-dimensional domain with a high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarking against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single-scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave-focusing effects on E(d), E(u), and L(u) data products is of the order of 10^6, 10^9, and 10^10, respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems in view of providing recommendations to improve measurement protocols.
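Photon counts of the order quoted above (10^6 to 10^10) follow from the usual 1/sqrt(N) convergence of Monte Carlo tallies: a quantity carried by only a small fraction of the launched photons needs proportionally more histories. The helper below is a back-of-envelope sketch of that scaling; the detection fractions are purely hypothetical, not values from this study.

```python
def photons_needed(rel_uncertainty, detection_fraction):
    """Crude 1/sqrt(N) estimate: to resolve a signal carried by a
    fraction f of launched photons to relative uncertainty r, one
    needs roughly N = 1 / (r^2 * f) histories (Poisson counting)."""
    return 1.0 / (rel_uncertainty ** 2 * detection_fraction)

# Hypothetical numbers: 1% target uncertainty, with a detection
# fraction of 1e-2 (downward irradiance) vs 1e-6 (upwelling radiance).
n_ed = photons_needed(0.01, 1e-2)
n_lu = photons_needed(0.01, 1e-6)
```

The four-orders-of-magnitude gap in detection fraction translates directly into the gap between the 10^6 and 10^10 photon budgets.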
An Automated, Multi-Step Monte Carlo Burnup Code System.
TRELLUE, HOLLY R.
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
NASA Technical Reports Server (NTRS)
Firstenberg, H.
1971-01-01
The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented, and the results are statistically interpreted.
EGS4. Electron-Gamma Shower Monte Carlo Code
Nelson, W.R.
1989-06-01
EGS4 (Electron-Gamma Shower) is a general purpose Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry for particles with energies above a few keV up to several TeV. The radiation transport of electrons or photons can be simulated in any element, compound, or mixture. The following physics processes can be taken into account: bremsstrahlung production (excluding the Elwert correction at low energies), positron annihilation in flight and at rest (the annihilation quanta are followed to completion), Moliere multiple scattering (i.e., Coulomb scattering from nuclei), Moller and Bhabha scattering, continuous energy loss applied to charged particle tracks between discrete interactions, pair production, Compton scattering, coherent (Rayleigh) scattering, and photoelectric effect. EGS4 allows for the implementation of importance sampling and other variance reduction techniques (e.g., leading particle biasing, splitting, path length biasing, Russian roulette, etc.). PEGS4 is a preprocessor for EGS4. It constructs piecewise-linear fits over a large number of energy intervals of the cross section and branching ratio data and contains options to plot any of the physical quantities used by EGS4, as well as to compare sampled distributions produced by user code with theoretical spectra.
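The piecewise-linear fits PEGS4 constructs can be illustrated with a minimal interpolator over a tabulated cross section. This is only a sketch of the idea (real PEGS4 fits span a large number of energy intervals in suitably transformed variables); the toy table is invented.

```python
import bisect

def make_pwl(energies, values):
    """Return a piecewise-linear interpolator over tabulated
    (energy, cross-section) pairs, in the spirit of a PEGS4-style fit.
    Values outside the table are clamped to the end points."""
    def interp(e):
        if e <= energies[0]:
            return values[0]
        if e >= energies[-1]:
            return values[-1]
        i = bisect.bisect_right(energies, e) - 1   # locate interval
        t = (e - energies[i]) / (energies[i + 1] - energies[i])
        return values[i] + t * (values[i + 1] - values[i])
    return interp

# Toy table: a cross section falling with energy (arbitrary units).
sigma = make_pwl([1.0, 10.0, 100.0], [5.0, 2.0, 0.5])
```

At run time the transport code then needs only a table lookup and one linear blend per sampled energy, which is why such fits are precomputed.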
The Monte Carlo code MCSHAPE: Main features and recent developments
NASA Astrophysics Data System (ADS)
Scot, Viviana; Fernandez, Jorge E.
2015-06-01
MCSHAPE is a general-purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons, with the special feature of describing the full evolution of the photon polarization state through its interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV (Compton and Rayleigh scattering and the photoelectric effect) are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and it provides a basis for personal computer software capable of space shield analysis and optimization.
Generation of SFR few-group constants using the Monte Carlo code Serpent
Fridman, E.; Rachamin, R.; Shwageraus, E.
2013-07-01
In this study, the Serpent Monte Carlo code was used as a tool for preparation of homogenized few-group cross sections for the nodal diffusion analysis of Sodium cooled Fast Reactor (SFR) cores. Few-group constants for two reference SFR cores were generated by Serpent and then employed by the nodal diffusion code DYN3D in 2D full-core calculations. The DYN3D results were verified against reference full-core Serpent Monte Carlo solutions. A good agreement between the reference Monte Carlo and nodal diffusion results was observed, demonstrating the feasibility of using Serpent for generation of few-group constants for deterministic SFR analysis. (authors)
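Few-group constant generation of the kind described here rests on flux-weighted condensation: fine-group cross sections are averaged using the fine-group flux as the weight. The sketch below shows the generic recipe, not Serpent's actual implementation; the group structure and numbers are illustrative.

```python
def collapse(fine_xs, fine_flux, group_bounds):
    """Flux-weighted condensation of fine-group cross sections into
    few groups: sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over the
    fine groups g belonging to the coarse group G."""
    few = []
    for lo, hi in group_bounds:  # index ranges [lo, hi) into fine groups
        num = sum(fine_xs[g] * fine_flux[g] for g in range(lo, hi))
        den = sum(fine_flux[g] for g in range(lo, hi))
        few.append(num / den)
    return few

# Collapse 4 fine groups into 2 coarse groups (arbitrary units).
xs = collapse(fine_xs=[1.0, 2.0, 4.0, 8.0],
              fine_flux=[3.0, 1.0, 1.0, 1.0],
              group_bounds=[(0, 2), (2, 4)])
```

The condensed constants preserve reaction rates for the spectrum used as the weight, which is why a faithful Monte Carlo flux solution makes a good weighting source for downstream nodal diffusion.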
Monte Carlo Capabilities of the SCALE Code System
NASA Astrophysics Data System (ADS)
Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.
2014-06-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Multiparticle Monte Carlo Code System for Shielding and Criticality Use.
2015-06-01
Version 00 COG is a modern, full-featured Monte Carlo radiation transport code that provides accurate answers to complex shielding, criticality, and activation problems. COG was written to be state-of-the-art and free of the physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep-penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://cog.llnl.gov. Cross section libraries are included in the package. COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. A lattice feature simplifies the specification of regular arrays of parts. Parallel processing under MPI is supported for multi-CPU systems. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, pathlength stretching, point detectors, scattered direction biasing, and forced collisions. Criticality: for a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation: COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled problems: COG can solve coupled problems involving neutrons, photons, and electrons. COG 11.1 is an updated version of COG 11.1 BETA 2 (RSICC C00777MNYCP02).
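The batch-wise Keff computation mentioned for COG follows the standard generation-ratio idea: transport a batch of neutrons, count the fission neutrons they produce, and average the ratio over active generations. The zero-dimensional toy below replaces real transport with a single fission probability per neutron; it is only a sketch of the estimator, and all parameters are invented.

```python
import random

def keff_generations(p_fission, nu, n_start=5000, n_gen=30, seed=7):
    """Toy Keff estimate: track neutron populations generation by
    generation. Each neutron causes fission with probability
    p_fission, releasing nu secondaries on average, so the analytic
    Keff is p_fission * nu. Keff is estimated as the mean generation
    ratio over the active generations."""
    rng = random.Random(seed)
    n = n_start
    ratios = []
    for _ in range(n_gen):
        fissions = sum(1 for _ in range(n) if rng.random() < p_fission)
        n_next = fissions * nu
        ratios.append(n_next / n)
        # renormalize the bank so the population stays manageable
        n = int(min(max(n_next, 1), n_start))
    active = ratios[5:]  # discard a few inactive generations
    return sum(active) / len(active)

k = keff_generations(p_fission=0.4, nu=2.5)  # analytic Keff = 1.0
```

Production codes add source-convergence diagnostics and statistical checks on the batch estimates; the renormalization step here mirrors the fission-bank normalization they perform between generations.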
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.
Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA-funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system.
Review of fast monte carlo codes for dose calculation in radiation therapy treatment planning.
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and successive generations of treatment planning systems have been developed according to these factors. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy, the code has to transport several million particles, which takes a few hours; the Monte Carlo techniques are therefore accurate, but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique.
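The particle-by-particle transport described above can be reduced to its simplest form: sample an exponential free path for each photon and tally where its energy is deposited. The toy 1D slab below (a single material, no scattering or buildup, invented parameters) is only a caricature of clinical dose calculation, but it shows why millions of histories are needed for smooth dose estimates.

```python
import math
import random

def dose_depth(n_photons, mu, depth, n_bins, seed=0):
    """Toy 1D dose calculation: photons enter a slab of thickness
    `depth`, travel an exponentially distributed free path with
    attenuation coefficient mu, and deposit all their energy at the
    first interaction site. Returns the fraction of histories
    depositing in each depth bin."""
    rng = random.Random(seed)
    bins = [0.0] * n_bins
    width = depth / n_bins
    for _ in range(n_photons):
        # inverse-CDF sample of the exponential free path
        x = -math.log(1.0 - rng.random()) / mu
        if x < depth:
            bins[int(x / width)] += 1.0
    return [b / n_photons for b in bins]

d = dose_depth(n_photons=100000, mu=0.5, depth=4.0, n_bins=4)
```

The tallied fractions approach the analytic values exp(-mu*x_lo) - exp(-mu*x_hi) per bin; halving the statistical noise requires four times as many histories, which is the cost that fast Monte Carlo systems attack.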
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Application of Monte Carlo codes to neutron dosimetry
Prevo, C.T.
1982-06-15
In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique.
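The integral (Neumann-series) view of the Boltzmann equation mentioned here treats a history as a chain of collisions, each of which survives as a scatter with probability c or ends in absorption. The mean number of collisions per history is then 1/(1 - c), and a minimal analog Monte Carlo simulation estimates exactly that by simple history averaging. The sketch below is generic, with illustrative parameters.

```python
import random

def mean_collisions(c, n_histories=200000, seed=3):
    """Analog Monte Carlo on the collision-number distribution: each
    collision survives (scatters) with probability c, otherwise the
    history ends in absorption. The analytic mean collision count is
    1/(1 - c); the simulation estimates it by averaging histories."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_histories):
        n = 1                      # the first collision always occurs
        while rng.random() < c:    # scatter and collide again
            n += 1
        total += n
    return total / n_histories

m = mean_collisions(0.5)  # analytic value: 1 / (1 - 0.5) = 2.0
```

Each term of the Neumann series corresponds to histories with a given number of collisions, which is the sense in which the Monte Carlo result "can in principle be made to approach the exact solution."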
Development and testing of a Monte Carlo code system for analysis of ionization chamber responses
Johnson, J.O.; Gabriel, T.A.
1986-01-01
To predict the perturbation of interactions between radiation and material by the presence of a detector, a differential Monte Carlo computer code system entitled MICAP was developed and tested. This code system determines the neutron, photon, and total response of an ionization chamber to mixed field radiation environments. To demonstrate the ability of MICAP in calculating an ionization chamber response function, a comparison was made to 05S, an established Monte Carlo code extensively used to accurately calibrate liquid organic scintillators. Both code systems modeled an organic scintillator with a parallel beam of monoenergetic neutrons incident on the scintillator. (LEW)
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry, and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water, and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
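Comparisons such as the dose-depth curves described here often come down to simple pointwise metrics evaluated on a shared depth grid. The helper below sketches one such metric; the two curves are hypothetical stand-ins, not actual HZETRN or FLUKA output.

```python
def max_rel_diff(curve_a, curve_b):
    """Pointwise maximum relative difference between two dose-depth
    curves tallied on the same depth grid, taking curve_a as the
    reference."""
    return max(abs(a - b) / a for a, b in zip(curve_a, curve_b))

# Hypothetical dose-vs-depth values (arbitrary units) for two codes.
deterministic_like = [10.0, 8.0, 6.5, 5.4]
monte_carlo_like = [10.2, 7.9, 6.8, 5.2]
d = max_rel_diff(deterministic_like, monte_carlo_like)
```

Reporting a single worst-case relative deviation like this makes it easy to see at which depth, and for which secondary-particle channel, two transport codes diverge most.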
NASA Astrophysics Data System (ADS)
Murray, J.; SU, J. J.; Sagdeev, R.; Chin, G.
2014-12-01
Introduction: Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the composition of the lunar soil [1-3]. Orbital measurements of lunar neutron flux have been made by the Lunar Prospector Neutron Spectrometer (LPNS) [4] of the Lunar Prospector mission and the Lunar Exploration Neutron Detector (LEND) [5] of the Lunar Reconnaissance Orbiter mission. While both are cylindrical helium-3 detectors, LEND's SETN (Sensor EpiThermal Neutrons) instrument is shorter, with double the helium-3 pressure of LPNS. The two instruments therefore have different angular sensitivities and neutron detection efficiencies. Furthermore, the Lunar Prospector's spin-stabilized design makes its detection efficiency latitude-dependent, while the SETN instrument faces permanently downward toward the lunar surface. We use the GEANT4 Monte Carlo simulation code [6] to investigate the leakage lunar neutron energy spectrum, which follows a power law of the form E^-0.9 in the epithermal energy range, and the signals detected by LPNS and SETN in the LP and LRO mission epochs, respectively. Using the lunar neutron flux reconstructed for the LPNS epoch, we calculate the signal that would have been observed by SETN at that time. The subsequent deviation from the actual signal observed during the LEND epoch is due to the significantly higher intensity of Galactic Cosmic Rays during the anomalous Solar Minimum of 2009-2010. References: [1] W. C. Feldman, et al. (1998) Science, Vol. 281, no. 5382, pp. 1496-1500. [2] Gasnault, O., et al. (2000) J. Geophys. Res., 105(E2), 4263-4271. [3] Little, R. C., et al. (2003) J. Geophys. Res., 108(E5), 5046. [4] W. C. Feldman, et al. (1999) Nucl. Instr. and Methods in Phys. Res. A 422, [5] M. L. Litvak, et al. (2012) J. Geophys. Res. 117, E00H32. [6] J. Allison, et al. (2006) IEEE Trans. on Nucl. Sci., Vol. 53, No. 1.
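Sampling a leakage spectrum of the form E^-0.9 in a Monte Carlo code is a textbook inverse-CDF exercise for a truncated power law. The sketch below is generic: the exponent comes from the abstract above, but the energy window is hypothetical, not taken from the LEND/LPNS analysis.

```python
import random

def sample_powerlaw(a, b, gamma=0.9, n=100000, seed=5):
    """Inverse-CDF sampling of a spectrum dN/dE proportional to
    E^-gamma (gamma != 1) between energies a and b. The CDF is
    proportional to E^(1-gamma), so a uniform deviate u maps to
    E = (a^p + u * (b^p - a^p))^(1/p) with p = 1 - gamma."""
    rng = random.Random(seed)
    p = 1.0 - gamma
    lo, hi = a ** p, b ** p
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / p) for _ in range(n)]

# Hypothetical epithermal window: 1 eV to 1000 eV.
es = sample_powerlaw(1.0, 1000.0)
```

Because the spectrum is shallow (gamma < 1), the sample mean sits far above the low-energy cutoff; for this window the analytic mean is about 182 eV.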
3D Direct Simulation Monte Carlo Code Which Solves for Geometries
1998-01-13
Pegasus is a 3D Direct Simulation Monte Carlo code which solves for geometries that can be represented by bodies of revolution. Included are all the surface chemistry enhancements of the 2D code Icarus, as well as a real vacuum pump model. The code includes multiple-species transport.
FREYA: a new Monte Carlo code for improved modeling of fission chains
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12
A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events that provides correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy correlations, angular correlations, and gamma-neutron energy sharing. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.
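FREYA's distinguishing feature is event-by-event correlation, which a short sketch cannot reproduce; as a point of contrast, the traditional uncorrelated treatment draws each prompt-neutron energy independently from a Watt spectrum. A sketch of that baseline (the a, b values are illustrative thermal 235U parameters and `sample_watt` is not a FREYA routine), using the standard decomposition of the Watt spectrum as a Maxwellian emitted isotropically from a moving fragment:

```python
import math
import random

def sample_watt(a=0.988, b=2.249, rng=random):
    """Sample a fission-neutron energy (MeV) from a Watt spectrum
    f(E) ~ exp(-E/a)*sinh(sqrt(b*E)), using its representation as a
    Maxwellian (temperature a) emitted isotropically from a fragment
    moving with energy a*a*b/4 per neutron."""
    # Maxwellian energy ~ Gamma(3/2, a) via the standard sampling trick
    ew = -a * (math.log(1.0 - rng.random())
               + math.log(1.0 - rng.random())
                 * math.cos(0.5 * math.pi * rng.random()) ** 2)
    mu = 2.0 * rng.random() - 1.0         # isotropic emission cosine
    return ew + a * a * b / 4.0 + mu * math.sqrt(a * a * b * ew)

rng = random.Random(7)
energies = [sample_watt(rng=rng) for _ in range(20000)]
```

The mean of this spectrum is 3a/2 + a^2*b/4, about 2.03 MeV for the parameters above, which gives a quick sanity check on the sampler.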
T.J. Urbatsch; T.M. Evans
2006-02-15
We have released Version 2 of Milagro, an object-oriented C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
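Functional Expansion Tallies project a tallied distribution onto a polynomial basis, so the result can later be evaluated anywhere on a different (e.g. finite element) mesh. A generic Legendre-basis sketch of the idea (this is not OpenMC's implementation; the function names are illustrative):

```python
import numpy as np

def legendre_P(n, x):
    """Evaluate Legendre polynomials P_0..P_n at x via the recurrence."""
    p = [np.ones_like(x), x]
    for k in range(1, n):
        p.append(((2 * k + 1) * x * p[k] - k * p[k - 1]) / (k + 1))
    return p[: n + 1]

def fet_coefficients(samples, weights, order):
    """Estimate expansion coefficients a_n = (2n+1)/2 * <w * P_n(x)> from
    tally samples x in [-1, 1]: a Monte Carlo estimate of the projection
    of the tallied density onto each basis function."""
    x = np.asarray(samples, dtype=float)
    w = np.asarray(weights, dtype=float)
    P = legendre_P(order, x)
    return np.array([(2 * n + 1) / 2 * np.mean(w * P[n])
                     for n in range(order + 1)])

def fet_eval(coeffs, x):
    """Reconstruct the tallied density at arbitrary points x."""
    P = legendre_P(len(coeffs) - 1, np.asarray(x, dtype=float))
    return sum(a * Pn for a, Pn in zip(coeffs, P))

# Demo: uniform samples should reconstruct the constant density 1/2 on [-1, 1]
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 20000)
coeffs = fet_coefficients(xs, np.ones(xs.size), order=4)
density_mid = float(fet_eval(coeffs, 0.0))
```

The payoff is that only the handful of coefficients, not the full tally mesh, needs to cross the code-coupling boundary.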
Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo
NASA Astrophysics Data System (ADS)
Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor
2005-06-01
New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modelling. Fast, detailed codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., combinations of the traditional Monte Carlo scheme with systems of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid three-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time-consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on simulating each particle with the traditional Monte Carlo method at two steps of the shower development: the first step reproduces the large fluctuations in the very first particle interactions at high energies, while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. The two Monte Carlo simulation steps are connected by a cascade equation system which correctly reproduces the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton-, iron-nucleus- and gamma-induced air showers and compare them with the predictions of the well-known CORSIKA code using the QGSJET hadronic interaction model.
Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System
NASA Astrophysics Data System (ADS)
Karim, Julia Abdul
2008-05-01
The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, based on the continuous-energy method, and is able to simulate problems accurately. It combines accurate physics models, detailed geometry description and variance reduction techniques, and achieves computational speeds several times higher than conventional scalar codes on vector supercomputers. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.
Progress and status of the OpenMC Monte Carlo code
Romano, P. K.; Herman, B. R.; Horelik, N. E.; Forget, B.; Smith, K.; Siegel, A. R.
2013-07-01
The present work describes the latest advances and progress in the development of the OpenMC Monte Carlo code, an open-source code originating from the Massachusetts Institute of Technology. First, an overview of the development workflow of OpenMC is given. Various enhancements to the code such as real-time XML input validation, state points, plotting, OpenMP threading, and coarse mesh finite difference acceleration are described. (authors)
Large scale cratering of the lunar highlands - Some Monte Carlo model considerations
NASA Technical Reports Server (NTRS)
Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.
1976-01-01
In an attempt to understand the scale and intensity of the moon's early, large-scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, for example, the number of times, and to what depths, specific fractions of the entire lunar surface were cratered. The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
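A cratering Monte Carlo of this kind reduces to sampling crater sizes from a size-frequency law, scattering impact points, and recording per-cell hit counts and depths. The toy sketch below illustrates only that bookkeeping; the power-law exponent, depth/diameter ratio, and grid are invented stand-ins, not the Pike (1974) or Dence (1973) geometries used in the paper:

```python
import random

def simulate_cratering(n_craters, grid=64, d_min=0.8, d_max=100.0,
                       b=2.0, depth_ratio=0.2, rng=random):
    """Toy cratering model on a 100x100 km patch: craters with a
    power-law cumulative size-frequency N(>D) ~ D**-b strike random
    points; each grid cell records how often and how deeply it was
    cratered. All lengths are in km; parameters are illustrative."""
    hits = [[0] * grid for _ in range(grid)]
    depth = [[0.0] * grid for _ in range(grid)]
    cell = 100.0 / grid
    for _ in range(n_craters):
        u = rng.random()                  # inverse transform for diameter D
        D = (d_min**-b + u * (d_max**-b - d_min**-b)) ** (-1.0 / b)
        cx, cy = rng.random() * 100.0, rng.random() * 100.0
        r2 = (D / 2.0) ** 2
        for i in range(grid):
            for j in range(grid):
                x, y = (i + 0.5) * cell, (j + 0.5) * cell
                if (x - cx) ** 2 + (y - cy) ** 2 <= r2:
                    hits[i][j] += 1
                    depth[i][j] = max(depth[i][j], depth_ratio * D)
    return hits, depth

rng = random.Random(1)
hits, depth = simulate_cratering(100, grid=16, rng=rng)
```

Statistics such as "fraction of the surface never cratered deeper than 100 m" then fall out of the `depth` array directly.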
A General-Purpose Monte Carlo Gamma-Ray Transport Code System for Minicomputers.
1981-08-27
Version 00 The OGRE code system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two codes which treat slab geometry. OGRE-P1 computes the dose on one side of a slab for a source on the other side, and HOTONE computes energy deposition in addition. The source may be monodirectional, isotropic, or cosine distributed.
Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
Kirk, B.L.; West, J.T.
1984-06-01
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on the different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided.
NASA Technical Reports Server (NTRS)
Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas
2000-01-01
An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.
Perfetti, Christopher M; Martin, William R; Rearden, Bradley T; Williams, Mark L
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Dose conversion coefficients for ICRP110 voxel phantom in the Geant4 Monte Carlo code
NASA Astrophysics Data System (ADS)
Martins, M. C.; Cordeiro, T. P. V.; Silva, A. X.; Souza-Santos, D.; Queiroz-Filho, P. P.; Hunt, J. G.
2014-02-01
The reference adult male voxel phantom recommended by International Commission on Radiological Protection (ICRP) Publication 110 was implemented in the Geant4 Monte Carlo code. Geant4 was used to calculate the Dose Conversion Coefficients (DCCs), expressed as dose deposited in organs per unit air kerma, that are tabulated for photons, electrons and neutrons in the Annals of the ICRP. In this work the AP and PA irradiation geometries of the ICRP male phantom were simulated for the purpose of benchmarking the Geant4 code. Monoenergetic photons were simulated between 15 keV and 10 MeV, and the results were compared with ICRP 110, the VMC Monte Carlo code and the available literature data, showing good agreement.
Monte Carlo Code System for Electron (Positron) Dose Kernel Calculations.
CHIBANI, OMAR
1999-05-12
Version 00 KERNEL performs dose kernel calculations for an electron (positron) isotropic point source in an infinite homogeneous medium. First, the auxiliary code PRELIM is used to prepare cross section data for the considered medium. Then the KERNEL code simulates the transport of electrons and bremsstrahlung photons through the medium until all particles reach their cutoff energies. The deposited energy is scored in concentric spherical shells at a radial distance ranging from zero to twice the source particle range.
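The scoring scheme described above, energy deposition binned into concentric spherical shells out to twice the source-particle range, can be caricatured in a few lines. This is an illustrative toy tally, not KERNEL's condensed-history electron physics; the 1-D walk, step sampling and per-step energy loss are invented stand-ins:

```python
import math
import random

def dose_kernel(n_particles, source_range=1.0, n_shells=20, rng=random):
    """Toy dose-kernel tally: score deposited energy in concentric
    spherical shells from r = 0 to r = 2 * source_range. Each particle
    starts with unit energy and loses a fixed fraction per sampled step
    (a crude stand-in for real electron transport)."""
    r_max = 2.0 * source_range
    edep = [0.0] * n_shells
    for _ in range(n_particles):
        energy, r, direction = 1.0, 0.0, 1.0
        while energy > 0.01:              # cutoff energy
            step = -0.1 * source_range * math.log(1.0 - rng.random())
            r = abs(r + direction * step) # 1-D stand-in for radial motion
            de = min(energy, 0.2)         # deposit a fixed chunk per step
            energy -= de
            shell = min(int(r / r_max * n_shells), n_shells - 1)
            edep[shell] += de
            direction = rng.choice((-1.0, 1.0))
    return edep

rng = random.Random(5)
edep = dose_kernel(100, rng=rng)
```

Because every particle deposits its full starting energy somewhere in the shells, the tally conserves energy exactly, which is the natural unit test for any kernel code.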
MCNP: a general Monte Carlo code for neutron and photon transport
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron-photon transport code system with approximately 250 person-years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code, used about 60 Cray hours per month by about 150 Los Alamos users. Its data base comprises the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N-Particle, where N particle varieties will be transported.
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes and 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous-energy cross sections and S(alpha, beta) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.
Buck, R M; Hall, J M
1999-06-01
COG is a major multiparticle simulation code in the LLNL Monte Carlo radiation transport toolkit. It was designed to solve deep-penetration radiation shielding problems in arbitrarily complex 3D geometries, involving coupled transport of photons, neutrons, and electrons. COG was written to provide as much accuracy as the underlying cross-sections will allow, and has a number of variance-reduction features to speed computations. Recently COG has been applied to the simulation of high-resolution radiographs of complex objects and the evaluation of contraband detection schemes. In this paper we will give a brief description of the capabilities of the COG transport code and show several examples of neutron and gamma-ray imaging simulations. Keywords: Monte Carlo, radiation transport, simulated radiography, nonintrusive inspection, neutron imaging.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
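As a flavor of the fundamentals such a course covers (random sampling and the Monte Carlo approach to transport), here is the textbook first example: estimating uncollided transmission through a slab by sampling exponential free-flight distances. Names and parameters are illustrative, not taken from the lecture notes:

```python
import math
import random

def transmission(mu_t, thickness, n, rng=random):
    """Estimate uncollided transmission through a slab of given thickness
    and total macroscopic cross section mu_t by sampling free-flight
    distances s = -ln(u)/mu_t and counting particles that cross without
    colliding. The analytic answer is exp(-mu_t * thickness)."""
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu_t >= thickness)
    return hits / n

rng = random.Random(0)
estimate = transmission(1.0, 1.0, 50000, rng)   # analytic answer: exp(-1)
```

The statistical error of such an analog estimate shrinks as 1/sqrt(n), which is exactly the motivation for the variance reduction and parallel algorithms the rest of the course develops.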
Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code
NASA Astrophysics Data System (ADS)
Yu, Deliang; Yan, Longwen; Zhong, Guangwu; Lu, Jie; Yi, Ping
2007-04-01
A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the splitting and roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and medium-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.
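The splitting and roulette schemes credited with the speedup can be sketched generically: particles whose statistical weight drifts outside a window are either split into several lighter copies or killed probabilistically, in a way that preserves the expected total weight and hence keeps tallies unbiased. A minimal sketch (the window bounds are illustrative assumptions, not MCHGAS values):

```python
import random

def split_or_roulette(weight, w_low=0.25, w_high=4.0, rng=random):
    """Apply particle splitting / Russian roulette against a weight
    window. Returns the list of surviving particle weights; the
    expected total weight equals the input weight, so tallies stay
    unbiased."""
    if weight > w_high:                   # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                    # roulette light particles
        return [w_low] if rng.random() < weight / w_low else []
    return [weight]

parts = split_or_roulette(10.0)           # deterministic split into 3 copies
rng = random.Random(3)
mean_wt = sum(sum(split_or_roulette(0.1, rng=rng))
              for _ in range(20000)) / 20000   # roulette preserves the mean
```

Splitting spends extra work where particles matter; roulette stops wasting work on particles that barely contribute, which is where the reduction in calculation time comes from.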
The Serpent Monte Carlo Code: Status, Development and Applications in 2013
NASA Astrophysics Data System (ADS)
Leppänen, Jaakko; Pusa, Maria; Viitanen, Tuomas; Valtavirta, Ville; Kaltiaisenaho, Toni
2014-06-01
The Serpent Monte Carlo reactor physics burnup calculation code has been developed at VTT Technical Research Centre of Finland since 2004, and is currently used in 100 universities and research organizations around the world. This paper presents the brief history of the project, together with the currently available methods and capabilities and plans for future work. Typical user applications are introduced in the form of a summary review on Serpent-related publications over the past few years.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
Monte Carlo code for neutron scattering instrumentation design and analysis
Daemen, L.; Fitzsimmons, M.; Hjelm, R.; Olah, G.; Roberts, J.; Seeger, P.; Smith, G.; Thelliez, T.
1996-09-01
This is the final report of a one-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The development of next generation, accelerator based neutron sources calls for the design of new instruments for neutron scattering studies of materials. It will be necessary, in the near future, to evaluate accurately and rapidly the performance of new and traditional neutron instruments at short- and long-pulse spallation neutron sources, as well as continuous sources. We have developed a code that serves as a design tool to assist the instrument designer in modeling new or existing instruments, testing their performance, and optimizing their most important features.
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
Tippayakul, C.; Ivanov, K.; Misu, S.
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements were initiated through cooperation between Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version for use as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN replaces the existing ORIGEN-S depletion module in MCOR. Furthermore, online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version, instead of using a burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents comparisons of the original and improved MCOR versions against CASMO-4 and OCTOPUS. Quite significant improvements of the results were observed in the comparisons, in terms of k-inf, fission rate distributions and isotopic contents. (authors)
Verification and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A
2004-12-09
Verification and Validation (V&V) is a critical phase in the development cycle of any scientific code. The aim of the V&V process is to determine whether or not the code fulfills and complies with the requirements that were defined prior to the start of the development process. While code V&V can take many forms, this paper concentrates on validation of the results obtained from a modern code against those produced by a validated, legacy code. In particular, the neutron transport capabilities of the modern Monte Carlo code MERCURY are validated against those in the legacy Monte Carlo code TART. The results from each code are compared for a series of basic transport and criticality calculations which are designed to check a variety of code modules. These include the definition of the problem geometry, particle tracking, collisional kinematics, sampling of secondary particle distributions, and nuclear data. The metrics that form the basis for comparison of the codes include both integral quantities and particle spectra. The use of integral results, such as eigenvalues obtained from criticality calculations, is shown to be necessary, but not sufficient, for a comprehensive validation of the code. This process has uncovered problems in both the transport code and the nuclear data processing codes which have since been rectified.
A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
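The folding step performed by the coupling code is, schematically, a single inner product of the forward fluence with the adjoint dose importance over the coupling surface and energy groups. A minimal sketch (illustrative only; quadrature weights and surface areas are assumed already absorbed into the fluence array, which the real coupling codes handle explicitly):

```python
def fold_dose(fluence, importance):
    """Fold a forward coupling-surface fluence with the adjoint 'dose
    importance' to obtain the detector dose response: a discrete sum
    over surface segments (rows) and energy groups (columns)."""
    return sum(f * i
               for row_f, row_i in zip(fluence, importance)
               for f, i in zip(row_f, row_i))

# Tiny worked example: 2 surface segments x 2 energy groups
dose = fold_dose([[1.0, 2.0], [3.0, 4.0]],
                 [[0.5, 0.5], [1.0, 1.0]])
```

Because the adjoint importance depends only on the detector and shield, the same importance map can be re-folded against many source fluences (different orientations and distances) at negligible cost, which is the point of the forward/adjoint coupling.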
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.
1999-07-01
This report describes the MCV (Monte Carlo Vectorized) neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP, and was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations.
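Of the variance reduction techniques listed, survival biasing (implicit capture) has a particularly compact form: instead of killing a particle on absorption with probability p, the particle always survives but carries the expected weight. A minimal illustrative sketch (generic, not RACER/MCV code) contrasting it with the analog treatment it replaces:

```python
import random

def analog_survive(weight, p_absorb, rng):
    """Analog treatment: the particle is absorbed (all weight lost) with
    probability p_absorb, otherwise it survives with its full weight."""
    return 0.0 if rng.random() < p_absorb else weight

def implicit_capture(weight, p_absorb):
    """Survival biasing: the particle always survives, carrying the
    expected surviving weight. Same mean as the analog game, but with
    zero variance in this step."""
    return weight * (1.0 - p_absorb)

# The analog mean converges to the implicit-capture value
rng = random.Random(11)
expected = implicit_capture(1.0, 0.3)
analog_mean = sum(analog_survive(1.0, 0.3, rng)
                  for _ in range(50000)) / 50000
```

Removing the random kill keeps every history alive longer, trading a little extra tracking work for a substantial reduction in tally variance.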
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
NASA Astrophysics Data System (ADS)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited by the fluid simulator), the implementation was made efficient and reusable.
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module simulates the physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with experimental results. In this paper, the influence of the inelastic cross sections and the vibrational excitation reaction on the parameters and the DNA strand break yields was studied. Further work on NASIC is underway.
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
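The forward-adjoint coupling described above amounts to an inner product of the coupling-surface fluence with the adjoint dose importance, summed over surface elements and energy groups. A minimal illustrative sketch follows; the arrays, group structure and numbers are hypothetical, not MASH data:

```python
import numpy as np

def fold_dose(fluence, importance, weights):
    """Fold a forward fluence with an adjoint dose importance.

    fluence:    array (n_surface, n_groups), forward flux on the coupling surface
    importance: array (n_surface, n_groups), adjoint "dose importance"
    weights:    array (n_surface,), surface-element areas / quadrature weights

    Returns the detector response D = sum_s w_s * sum_g phi_sg * psi_sg.
    """
    return float(np.sum(weights[:, None] * fluence * importance))

# Toy 2-surface, 3-group example (illustrative numbers only).
phi = np.array([[1.0, 0.5, 0.2],
                [0.8, 0.4, 0.1]])
psi = np.array([[0.1, 0.2, 0.3],
                [0.1, 0.1, 0.2]])
w = np.array([2.0, 1.0])

dose = fold_dose(phi, psi, w)
```

In the real code system the importance function is tallied by the adjoint MORSE run, so re-folding against fluences for different orientations or distances requires no new Monte Carlo calculation.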
Dosimetric characterization of an 192Ir brachytherapy source with the Monte Carlo code PENELOPE.
Casado, Francisco Javier; García-Pareja, Salvador; Cenizo, Elena; Mateo, Beatriz; Bodineau, Coral; Galán, Pedro
2010-01-01
Monte Carlo calculation is a widespread and well-established practice for obtaining the dosimetric parameters of brachytherapy sources. In this study, the recommendations of the AAPM TG-43U1 report have been followed to characterize the Varisource VS2000 (192)Ir high dose rate source, provided by Varian Oncology Systems. In order to obtain dosimetric parameters for this source, Monte Carlo calculations with the PENELOPE code have been carried out. TG-43 formalism parameters are presented, i.e., air kerma strength, dose rate constant, radial dose function and anisotropy function. In addition, a 2D Cartesian-coordinate table of dose rate in water has been calculated. These quantities are compared with the reference data for this source, and the results are found to be in good agreement with them. The data in the present study complement published data in the following respects: (i) TG-43U1 recommendations are followed regarding phantom ambient conditions and uncertainty analysis, including statistical (type A) and systematic (type B) contributions; (ii) the PENELOPE code is benchmarked for this source; (iii) the Monte Carlo calculation methodology differs from that usually published in the way absorbed dose is estimated, leaving out the track-length estimator; (iv) the results of the present work comply with the most recent AAPM and ESTRO physics committee recommendations on Monte Carlo techniques, with regard to dose rate uncertainty values and the established differences between our results and reference data. The results stated in this paper provide a complete parameter collection, which can be used for dosimetric calculations as well as a means of comparison with other datasets for this source.
Movable geometry and eigenvalue search capability in the MC21 Monte Carlo code
Gill, D. F.; Nease, B. R.; Griesheimer, D. P.
2013-07-01
A robust and flexible movable geometry implementation in the Monte Carlo code MC21 is described, along with a search algorithm that can be used in conjunction with the movable geometry capability to perform eigenvalue searches based on the position of some geometric component. The natural use of the combined movement and search capability is searching to critical through variation of control rod (or control drum) position. The movable geometry discussion provides the mathematical framework for moving surfaces in the MC21 combinatorial solid geometry description. The interface between the movable geometry system and the user is also described, particularly the ability to create a hierarchy of movable groups. Combined with the hierarchical geometry description in MC21, the movable group framework provides a very powerful system for inline geometry modification. The eigenvalue search algorithm implemented in MC21 is also described. The foundation of this algorithm is a regula falsi search, although several considerations are made in an effort to increase the efficiency of the algorithm for use with Monte Carlo. Specifically, criteria are developed to determine after each batch whether the Monte Carlo calculation should be continued, the search iteration should be rejected, or the search iteration has converged. These criteria seek to minimize the amount of time spent per iteration. Results for the regula falsi method are shown, illustrating that the method as implemented is indeed convergent and that the optimizations made ultimately reduce the total computational expense. (authors)
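The core of the search described above is an ordinary regula falsi (false position) iteration on a geometric parameter; the batch-wise accept/reject logic is omitted here, and the smooth k-effective function standing in for the Monte Carlo estimate is purely hypothetical:

```python
def regula_falsi(f, a, b, target=1.0, tol=1e-6, max_iter=100):
    """Find x with f(x) == target by the regula falsi (false position) method.

    Requires f(a) - target and f(b) - target to have opposite signs.
    """
    fa, fb = f(a) - target, f(b) - target
    assert fa * fb < 0, "target must be bracketed"
    for _ in range(max_iter):
        # Secant line through the two bracketing points.
        x = b - fb * (b - a) / (fb - fa)
        fx = f(x) - target
        if abs(fx) < tol:
            return x
        # Keep whichever sub-interval still brackets the root.
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

# Hypothetical k-effective as a function of control rod insertion z.
k_eff = lambda z: 1.05 - 0.1 * z          # illustrative only, not MC21 physics
z_crit = regula_falsi(k_eff, 0.0, 1.0)    # search to critical (k = 1)
```

In the Monte Carlo setting each evaluation of f is itself a statistical estimate, which is why the paper develops batch-wise criteria for continuing, rejecting, or accepting an iteration rather than trusting each evaluation outright.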
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures, which causes an increasing error as one moves further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler broadened, cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
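One plausible reading of the probability-table interpolation mentioned above is that the abscissa is sqrt(T) and the tabulated quantity is interpolated logarithmically between the two bracketing temperatures. The sketch below uses that reading; the grid and values are hypothetical, and KENO's actual scheme may differ in detail:

```python
import math

def interp_sqrtT_linlog(T, T_grid, sigma_grid):
    """Interpolate a probability-table quantity between tabulated temperatures.

    Assumed scheme: x = sqrt(T) is the linear abscissa and the value is
    interpolated logarithmically (i.e., linearly in log(sigma)).
    """
    x = math.sqrt(T)
    xs = [math.sqrt(t) for t in T_grid]
    # Locate the bracketing interval (grid assumed sorted, T inside range).
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            f = (x - xs[i]) / (xs[i + 1] - xs[i])
            return math.exp((1 - f) * math.log(sigma_grid[i])
                            + f * math.log(sigma_grid[i + 1]))
    raise ValueError("temperature outside tabulated range")

# Hypothetical two-point table; at a grid temperature the tabulated value returns.
sigma = interp_sqrtT_linlog(600.0, [300.0, 1200.0], [10.0, 8.0])
```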
Development and Testing of a Chemical Sputtering Model for the Monte Carlo Impurity (MCI) Code
NASA Astrophysics Data System (ADS)
Loh, Y. S.; Evans, T. E.; West, W. P.; Finkenthal, D. F.; Fenstermacher, M. E.; Porter, G. D.
1997-11-01
Fluid code calculations indicate that chemical sputtering may be an important process in high density, radiatively detached, tokamak divertor operations. A chemical sputtering model has been designed and installed into the DIII--D Monte Carlo Impurity (MCI) transport code. We will discuss how the model was constructed and the sources of atomic data used. Comparisons between chemical and physical sputtering yields will be presented for differing plasma conditions. Preliminary comparisons with DIII--D experimental data and a discussion of the benchmarking process will be presented.
Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
1998-06-12
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
A Monte Carlo model for the gardening of the lunar regolith
NASA Technical Reports Server (NTRS)
Arnold, J. R.
1975-01-01
The processes of movement and turnover of the lunar regolith are described by a Monte Carlo model. The movement of material by the direct cratering process is the dominant mode, but slumping is also included for angles exceeding the static angle of repose. Using a group of interrelated computer programs, a large number of properties are calculated, including topography, formation of layers, depth of the disturbed layer, nuclear-track distributions, and cosmogenic nuclides. In the most complex program, the history of a 36-point square array is followed for times up to 400 million years. The histories generated are complex and exhibit great variety. Because a crater covers much less area than its ejecta blanket, there is a tendency for the height change at a test point to exhibit periods of slow accumulation followed by sudden excavation. In general, the agreement with experiment and observation seems good, but two areas of disagreement stand out. First, the calculated surface is rougher than that observed. Second, the observed bombardment ages, of the order of 400 million years, are shorter than expected (by perhaps a factor of 5).
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
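The first acceleration listed above, replacing a linear energy-grid scan with a binary search, can be sketched with Python's bisect module; the log-spaced grid is illustrative, not ITS data. Both routines return the interval index i with grid[i] <= e < grid[i+1], clamped to the grid:

```python
import bisect

def linear_search(grid, e):
    """Original-style O(n) scan for the interval containing energy e."""
    i = 0
    while i < len(grid) - 2 and grid[i + 1] <= e:
        i += 1
    return i

def binary_search(grid, e):
    """O(log n) replacement producing the same interval index."""
    return min(max(bisect.bisect_right(grid, e) - 1, 0), len(grid) - 2)

# Illustrative log-spaced energy grid.
grid = [1e-5 * 1.5 ** k for k in range(50)]
```

Because the search result feeds every cross-section lookup, the O(n) to O(log n) change alone can account for a large share of the reported speed-up in lookup-dominated subroutines.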
OpenMC: A State-of-the-Art Monte Carlo Code for Research and Development
NASA Astrophysics Data System (ADS)
Romano, Paul K.; Horelik, Nicholas E.; Herman, Bryan R.; Nelson, Adam G.; Forget, Benoit; Smith, Kord
2014-06-01
This paper gives an overview of OpenMC, an open source Monte Carlo particle transport code recently developed at the Massachusetts Institute of Technology. OpenMC uses continuous-energy cross sections and a constructive solid geometry representation, enabling high-fidelity modeling of nuclear reactors and other systems. Modern, portable input/output file formats are used in OpenMC: XML for input, and HDF5 for output. High performance parallel algorithms in OpenMC have demonstrated near-linear scaling to over 100,000 processors on modern supercomputers. Other topics discussed in this paper include plotting, CMFD acceleration, variance reduction, eigenvalue calculations, and software development processes.
Coincidence summing corrections for volume samples using the PENELOPE/penEasy Monte Carlo code.
Vargas, A; Camp, A; Serrano, I; Duch, M A
2014-05-01
The coincidence summing correction factors estimated with penEasy, a steering program for the Monte Carlo simulation code PENELOPE, and with penEasy-eXtended, an in-house modified version of penEasy, are presented and discussed for (152)Eu and (134)Cs in volume sources. The geometries and experimental data were obtained from an intercomparison study organized by the International Committee for Radionuclide Metrology (ICRM). A significant improvement in the results calculated with PENELOPE/penEasy was obtained when X-rays are included in the (152)Eu simulations.
New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O'Brien, M J; Taylor, J M
2007-03-08
The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability of initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of in-line visualization capabilities.
Development of a dynamic simulation mode in Serpent 2 Monte Carlo code
Leppaenen, J.
2013-07-01
This paper presents a dynamic neutron transport mode, currently being implemented in the Serpent 2 Monte Carlo code for the purpose of simulating short reactivity transients with temperature feedback. The transport routine is introduced and validated by comparison to MCNP5 calculations. The method is also tested in combination with an internal temperature feedback module, which forms the inner part of a multi-physics coupling scheme in Serpent 2. The demo case for the coupled calculation is a reactivity-initiated accident (RIA) in PWR fuel. (authors)
Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
NASA Astrophysics Data System (ADS)
Kum, Oyeon
2004-11-01
Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, the so-called "Doctor by Information Technology", would be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution for providing "accurate" dose distributions within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random number series on the different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones. The parallel efficiency approached 100%, as was expected.
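The overlapping-stream problem noted above can be illustrated with a modern per-rank seeding scheme. The sketch below uses NumPy's SeedSequence.spawn to derive statistically independent child streams, one per hypothetical MPI rank; the original PMCEPT code used its own multiple-seed mechanism:

```python
import numpy as np

def make_rank_streams(base_seed, n_ranks):
    """Create one independent random-number stream per (hypothetical) MPI rank.

    SeedSequence.spawn derives child seeds whose generators are statistically
    independent, so no two ranks walk through overlapping random sequences.
    """
    children = np.random.SeedSequence(base_seed).spawn(n_ranks)
    return [np.random.default_rng(child) for child in children]

streams = make_rank_streams(base_seed=12345, n_ranks=4)
draws = [rng.random(3) for rng in streams]  # each "rank" draws its own numbers
```

With independent streams, per-rank tallies can simply be summed at the end of the run, which is what makes the near-100% parallel efficiency reported above achievable.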
Development and validation of ALEPH2 Monte Carlo burn-up code
Van Den Eynde, G.; Stankovskiy, A.; Fiorito, L.; Broustaut, M.
2013-07-01
The ALEPH2 Monte Carlo depletion code has two principal features that make it a flexible and powerful tool for reactor analysis. First of all, its comprehensive nuclear data library ensures consistency between the steady-state Monte Carlo and deterministic depletion modules. It covers neutron- and proton-induced reactions, neutron and proton fission product yields, spontaneous fission product yields, radioactive decay data and total recoverable energies per fission. Secondly, ALEPH2 uses an advanced numerical solver for the first-order ordinary differential equations describing the isotope balances, namely a Radau IIA implicit Runge-Kutta method. The versatility of the code allows it to be used for simulating the time behavior of various systems, ranging from a single pin model to a full-scale reactor model. The code is extensively used for the neutronics design of the MYRRHA research fast spectrum facility, which will operate in both critical and sub-critical modes. The code has been validated against decay heat data from the JOYO experimental fast reactor. (authors)
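For a linear isotope-balance system dN/dt = A N (with A a constant burnup matrix), one step of the 2-stage, order-3 Radau IIA method reduces to a single linear solve. The sketch below demonstrates this on a toy two-nuclide decay chain; ALEPH2's actual solver handles far larger, stiffer systems with adaptive stepping:

```python
import numpy as np

# Butcher tableau of the 2-stage Radau IIA method (order 3).
A_RK = np.array([[5/12, -1/12],
                 [3/4,   1/4]])
B_RK = np.array([3/4, 1/4])

def radau_iia_step(A, y, h):
    """One implicit Radau IIA step for the linear ODE y' = A y.

    The stage equations (I - h A_RK (x) A) K = 1 (x) (A y) are solved
    directly, which is exact for a linear right-hand side.
    """
    n, s = len(y), len(B_RK)
    M = np.eye(s * n) - h * np.kron(A_RK, A)
    rhs = np.tile(A @ y, s)
    K = np.linalg.solve(M, rhs).reshape(s, n)
    return y + h * B_RK @ K

# Toy decay chain N1 -> N2 with lambda1 = 1.0, lambda2 = 0.1 (arbitrary units).
A = np.array([[-1.0,  0.0],
              [ 1.0, -0.1]])
y = np.array([1.0, 0.0])
for _ in range(100):
    y = radau_iia_step(A, y, h=0.01)   # integrate to t = 1
```

Radau IIA is L-stable, which is why it is a natural choice for the stiff Bateman equations, where decay constants span many orders of magnitude.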
NASA Astrophysics Data System (ADS)
Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.
2015-11-01
In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
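The central idea above, shrinking the searched energy vector using physics information, can be sketched as a bounded binary search: after a collision the outgoing energy is kinematically constrained, so the lookup can be confined to a window around the previous grid index. The window logic here is deliberately simplified (a fixed number of grid points with a full-search fallback), whereas the paper derives bounds from scattering kinematics and spectrum characteristics:

```python
import bisect

def windowed_index(grid, e_new, i_prev, window):
    """Locate the interval index of e_new, searching only a window of the
    grid around the previous index i_prev.

    In practice the window would come from physics (e.g. the maximum
    fractional energy loss per collision); here it is a fixed point count,
    with a full binary search as fallback when the window misses.
    """
    lo = max(i_prev - window, 0)
    hi = min(i_prev + window, len(grid) - 1)
    if grid[lo] <= e_new <= grid[hi]:
        i = lo + bisect.bisect_right(grid[lo:hi + 1], e_new) - 1
    else:
        i = bisect.bisect_right(grid, e_new) - 1   # fallback: full search
    return min(max(i, 0), len(grid) - 2)

# Illustrative log-spaced energy grid.
grid = [1e-5 * 1.2 ** k for k in range(200)]
```

Since binary search cost grows with the logarithm of the searched length, shrinking the window from the full grid to a few points around the previous index is where the reported 1.2-1.5x speedups come from.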
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N-Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities, differences were in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions.
Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics
Battistoni, Giuseppe; Broggi, Francesco; Brugger, Markus; Campanella, Mauro; Carboni, Massimo; Empl, Anton; Fasso, Alberto; Gadioli, Ettore; Cerutti, Francesco; Ferrari, Alfredo; Ferrari, Anna; Lantz, Matthias; Mairani, Andrea; Margiotta, M.; Morone, Christina; Muraro, Silvia; Parodi, Katerina; Patera, Vincenzo; Pelliccioni, Maurizio; Pinsky, Lawrence; Ranft, Johannes
2012-04-17
FLUKA is a general-purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies, and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular, it addresses topics such as accelerator-related applications.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Response data can also be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code
Trumbull, T. H.
2013-07-01
With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition, and the subsequent heating, has increased. However, the availability and quality of the data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies while also allowing calculations with different levels of fidelity to be performed. Detailed discussions of the usage of the various components of the energy release data are provided to demonstrate novel methods for borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor is a result of both neutron and photon interactions with materials, a discussion of the photon energy deposition data processing is also provided. (authors)
Spread-out Bragg peak and monitor units calculation with the Monte Carlo Code MCNPX
Herault, J.; Iborra, N.; Serrano, B.; Chauvel, P.
2007-02-15
The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the protontherapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation.
Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code
NASA Technical Reports Server (NTRS)
Yamakov, Vesselin I.
2016-01-01
This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
Uncertainty analysis in environmental radioactivity measurements using the Monte Carlo code MCNP5
NASA Astrophysics Data System (ADS)
Gallardo, S.; Querol, A.; Ortiz, J.; Ródenas, J.; Verdú, G.; Villanueva, J. F.
2015-11-01
High Purity Germanium (HPGe) detectors are widely used for environmental radioactivity measurements due to their excellent energy resolution. Monte Carlo (MC) codes are a useful tool to complement experimental measurements in calibration procedures at the laboratory. However, the efficiency curve of the detector can vary due to uncertainties associated with the measurements. These uncertainties can be classified into several categories: geometrical parameters of the measurement (source-detector distance, volume of the source), properties of the radiation source (radionuclide activity, branching ratio), and detector characteristics (Ge dead layer, active volume, end cap thickness). The Monte Carlo simulation can also be affected by other kinds of uncertainties, mainly related to cross sections and to the calculation itself. Normally, all these uncertainties are not well known, and a detailed analysis is required to determine their effect on the detector efficiency. In this work, the Noether-Wilks formula is used to carry out the uncertainty analysis. A Probability Density Function (PDF) is assigned to each variable involved in the sampling process. The size of the sample is determined from the characteristics of the tolerance intervals by applying the Noether-Wilks formula. The analysis transforms the efficiency curve into a region of possible values within the tolerance intervals. Results show a good agreement between experimental measurements and simulations for two different matrices (water and sand).
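The sample size in this kind of tolerance-interval analysis follows from the Wilks criterion; in its simplest one-sided, first-order form, the smallest number of code runs N satisfies 1 - γ^N ≥ β, where γ is the coverage and β the confidence. The Noether-Wilks variant used in the paper may differ in detail; a sketch of the classic criterion:

```python
import math

def wilks_sample_size(gamma, beta):
    """Smallest N such that the largest of N random samples bounds the
    gamma-quantile of the output with confidence beta (one-sided,
    first-order Wilks criterion: 1 - gamma**N >= beta)."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

n_runs = wilks_sample_size(gamma=0.95, beta=0.95)  # classic 95%/95% case
```

The well-known 95%/95% result of 59 runs is what makes the approach practical: the number of Monte Carlo calculations needed is independent of how many uncertain input parameters are sampled.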
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is too large to fit in the memory of a single processor and must therefore be distributed. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation
NASA Astrophysics Data System (ADS)
Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2016-03-01
It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas the coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore, in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprised of a repeating slit pattern with 2-mm periodicity, and a linear array of 128 detector pixels with 6.5-keV energy resolution. The scanned breast tissue comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground truth image showed that the coded aperture imaging technique has a cancerous-pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer) and accuracy of 92.4%, 91.9% and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
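The per-pixel classification step described above (assign each pixel the material whose reference cross section gives the highest correlation coefficient) can be sketched directly. This is a generic maximum-correlation classifier, assuming precomputed reference spectra; the function and variable names are hypothetical:

```python
import numpy as np

def classify_by_correlation(recon, references, labels):
    """Label each pixel with the reference material whose normalized
    differential coherent-scatter cross section best correlates (Pearson)
    with the pixel's reconstructed cross section."""
    recon = np.asarray(recon, dtype=float)        # (n_pixels, n_energy_bins)
    refs = np.asarray(references, dtype=float)    # (n_classes, n_energy_bins)
    # center each spectrum, then correlation = normalized dot product
    rc = recon - recon.mean(axis=1, keepdims=True)
    fc = refs - refs.mean(axis=1, keepdims=True)
    corr = (rc @ fc.T) / np.outer(np.linalg.norm(rc, axis=1),
                                  np.linalg.norm(fc, axis=1))
    return [labels[i] for i in corr.argmax(axis=1)]
```

In the paper's setting the reference set would include air plus the modeled breast tissue types, and the inputs would be the iteratively reconstructed cross sections.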
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications
Chibani, O; Ma, C; Eldib, A
2014-06-01
Purpose: Present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835-846). Phase space data generated for Varian linac photon beams (6-15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in absolute dose (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for lateral dose profile). See Fig.1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and a lung case with eight fields) are carried out and outcomes are compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig.4), but dose in lung can be over-estimated by up to 10% by AAA (Fig.5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig.6). Conclusion: A new Monte Carlo code is developed for plan verification. In contrast to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
Cullen, D.E.
1994-08-01
The computer code WALKMAN performs electron single-collision elastic scattering Monte Carlo calculations in spherical or planar geometry. It is intended as a research tool to obtain results that can be compared to those of condensed-history calculations. The code is designed to be self-documenting, in the sense that the latest documentation is included as comment lines at the beginning of the code. Printed documentation, such as this report, is published periodically and consists mostly of a copy of those comment lines. Because the comment lines within the code are continually updated to reflect its most recent status, they should always be considered the most current documentation and may supersede printed reports; the user is therefore advised to read the documentation within the actual code. The remainder of this report consists of example results and a listing of the documentation which appears at the beginning of the code.
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
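The abstract does not specify its discrete-sampling method, but one standard way to make discrete sampling branch-free, and hence vectorizable, is the alias method, sketched here with NumPy as an illustration of the general idea rather than a reconstruction of the paper's technique:

```python
import numpy as np

def build_alias_table(p):
    """Vose alias-table construction for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    q = p * n / p.sum()                 # scaled probabilities, mean 1
    prob = np.zeros(n)
    alias = np.zeros(n, dtype=int)
    small = [i for i in range(n) if q[i] < 1.0]
    large = [i for i in range(n) if q[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = q[s], l     # bin s: keep s w.p. q[s], else alias l
        q[l] -= 1.0 - q[s]
        (small if q[l] < 1.0 else large).append(l)
    for i in small + large:             # leftovers are exactly 1 up to round-off
        prob[i] = 1.0
    return prob, alias

def sample_alias(prob, alias, size, rng):
    """Branch-free, fully vectorized draw of `size` category indices."""
    i = rng.integers(0, prob.size, size)
    return np.where(rng.random(size) < prob[i], i, alias[i])
```

Every sample costs one uniform index, one uniform deviate, and one conditional select, with no data-dependent branching, which is exactly the property a vector machine like the CYBER-205 needs for collision analysis.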
DAMOCLES - A Monte Carlo Code for the Design Optimization of the OMNIS Detectors
NASA Astrophysics Data System (ADS)
Zach, Juergen J.; Stj. Murphy, Alexander; Marriott, Darin; Boyd, Richard N.
2000-10-01
In support of the proposal for the Observatory for Multiflavor NeutrIno Oscillations, a Monte Carlo code has been developed to realistically simulate the operation of the planned detectors. OMNIS is based on the detection of neutrons emitted from nuclei excited by neutrinos from a supernova burst interacting with Pb or Fe nuclei. This is accomplished using Gd-loaded liquid scintillator. Results for the optimum configuration of the modules with respect to cost-efficiency are presented. The results show that the amount of data to be processed by a software trigger can be reduced to the <10 kHz region and that a neutron, once produced in the detector, can be detected and identified as originating from a neutrino spallation event with an efficiency above 30%. The effects of radioactive impurities in the detector materials used are also presented.
TRIPOLI-4®, CEA, EDF and AREVA Reference Monte Carlo Code
NASA Astrophysics Data System (ADS)
2014-06-01
This paper presents an overview of TRIPOLI-4®, the fourth generation of the 3D continuous-energy Monte Carlo code developed by the Service d'Etudes des Réacteurs et de Mathématiques Appliquées (SERMA) at CEA Saclay. The paper surveys the generic features: programming language, parallel operation, tracked particles, nuclear data, geometry, simulation modes, standard variance reduction techniques, sources, tracking and collision algorithms, tallies, sensitivity studies. Moreover, specific and recent features are also detailed: Doppler broadening of the elastic scattering kernel, neutron and photon material irradiation, advanced variance reduction techniques, Green's functions, cycle correlation correction, nuclear data management and depletion capabilities. The productivity tools (T4G, SALOME TRIPOLI, T4RootTools), the Verification & Validation process and the distribution and licensing policy are finally presented.
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-03-01
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.
Space applications of the MITS electron-photon Monte Carlo transport code system
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). Under the free boundary condition, MCBC correctly predicted the peak of the highly charged ion state outputs; under the Bohm condition it predicted a similar charge state distribution width but a lower peak charge state. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as
Sunny, E. E.; Martin, W. R.
2013-07-01
Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)
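The reference DBRC scheme mentioned above can be sketched with a plain-rejection variant of free-gas target-velocity sampling. This is a simplified illustration, not the optimized two-branch sampler used in production codes; the cross-section function sigma_0K and its bound sigma_max are caller-supplied stand-ins for the 0 K elastic scattering cross section, and the units are a deliberately simple convention (neutron mass = 1):

```python
import math
import random

def sample_relative_speed_dbrc(E_n, A, kT, sigma_0K, sigma_max, rng=None):
    """Plain-rejection free-gas target-velocity sampling with a DBRC-style
    extra rejection on the 0 K scattering cross section sigma_0K(v_rel).
    Units: neutron mass = 1, so E = v**2 / 2; kT in the same units as E_n."""
    rng = rng or random.Random(12345)
    v_n = math.sqrt(2.0 * E_n)
    spread = math.sqrt(kT / A)   # 1-D thermal velocity spread of target, mass A
    while True:
        # Maxwellian target velocity: three independent Gaussian components
        Vx, Vy, Vz = (rng.gauss(0.0, spread) for _ in range(3))
        V = math.sqrt(Vx * Vx + Vy * Vy + Vz * Vz)
        mu = 2.0 * rng.random() - 1.0          # cosine between v_n and V
        v_rel = math.sqrt(v_n * v_n + V * V - 2.0 * v_n * V * mu)
        # free-gas weight: collision rate is proportional to v_rel <= v_n + V
        if rng.random() * (v_n + V) > v_rel:
            continue
        # DBRC step: re-weight by the energy-dependent 0 K cross section
        if rng.random() * sigma_max > sigma_0K(v_rel):
            continue
        return v_rel
```

With a flat sigma_0K the DBRC step is inert and this reduces to the standard free-gas model; near a resonance, sigma_0K(v_rel) re-weights the accepted target velocities, which is the effect that restores the up-scattering the free gas model misses.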
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Giuseppe Palmiotti
2015-05-01
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Verification of SMART Neutronics Design Methodology by the MCNAP Monte Carlo Code
Jong Sung Chung; Kyung Jin Shim; Chang Hyo Kim; Chungchan Lee; Sung Quun Zee
2000-11-12
SMART is a small advanced integral pressurized water reactor (PWR) of 330 MW(thermal) designed for both electricity generation and seawater desalinization. The CASMO-3/MASTER nuclear analysis system, a design basis of Korean PWR plants, has been employed for the SMART core nuclear design and analysis because the fuel assembly (FA) characteristics and reactor operating conditions in temperature and pressure are similar to those of PWR plants. However, the SMART FAs are highly poisoned, with more than 20 Al₂O₃-B₄C plus additional Gd₂O₃/UO₂ BPRs per FA. The reactor is operated with control rods inserted. Therefore, the flux and power distribution may become more distorted than those of commercial PWR plants. In addition, SMART must produce power from room temperature to the hot-power operating condition because it employs nuclear heating from room temperature. This demands reliable predictions of core criticality, shutdown margin, control rod worth, power distributions, and reactivity coefficients at both room temperature and hot operating condition, yet no such data are available to verify the CASMO-3/MASTER (hereafter MASTER) code system. In the absence of experimental verification data for the SMART neutronics design, the Monte Carlo depletion analysis program MCNAP is adopted as a near-term alternative for qualifying MASTER neutronics design calculations. MCNAP is a personal computer-based continuous-energy Monte Carlo neutronics analysis program written in the C++ language. We established its qualification by presenting its prediction accuracy on measurements of the Venus critical facilities, core neutronics analysis of a PWR plant in operation, and depletion characteristics of integral burnable absorber FAs of the current PWR. Here, we present a comparison of MASTER and MCNAP neutronics design calculations for SMART and establish the qualification of the MASTER system.
A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator.
Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein
2014-01-01
Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of the GEANT4 application for tomographic emission (GATE) 6.1 were compared with Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, the same geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in the PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution calculated by the Standard and Livermore EM models and MCNPX. The Penelope model, however, showed marked differences. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27% and 0.29% for PDDs of a 10 cm × 10 cm field size, for the MCNPX, Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross section and stopping power data, and to the physics processes not being simulated identically by MCNPX and the three EM models. For example, in the Standard model the photoelectron direction is sampled from the Gavrila-Sauter distribution, whereas in the photoelectric process of the Livermore and Penelope models the photoelectron moves in the same direction as the incident photon. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model.
2013-06-24
Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron-induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than previous releases by concentrating on improving the physics, particularly with regard to improved treatment of neutron fission, resonance self-shielding, molecular binding, and extending the input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous-energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interactions, the data are derived using ENDF-ENDL2005 and include both continuous-energy cross sections and 700-group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700-group structure extends from 10⁻⁵ eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interactions, 701-point photon data were derived using the Livermore EPDL97 file. The new 701-point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check the authors' homepage for related information: http
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% relative in the control rod reactivity, and 1% in the sodium void reactivity.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)
Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann
2011-07-01
There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
Simulation for the Production of Technetium-99m Using Monte Carlo N-Particle Transport Code
NASA Astrophysics Data System (ADS)
Kaita, Courtney; Gentile, Charles; Zelenty, Jennifer
2010-11-01
The Monte Carlo N-Particle Transport Code (MCNP) is employed to simulate the radioisotope production process that leads to the creation of Technetium-99m (Tc-99m). Tc-99m is a common metastable nuclear isomer used in nuclear medicine tests and is produced from the gamma decay of Molybdenum-99 (Mo-99). Mo-99 is commonly produced from the fission of Uranium-235, a complicated process which is only performed at a limited number of facilities. Due to the age of these facilities, coupled with the critical importance of a steady flow of Mo-99, new methods of generating Mo-99 are being investigated. Current experiments demonstrate promising alternatives, one of which consists of the neutron activation of Molybdenum-98 (Mo-98), a naturally occurring isotope. Mo-98 has a small cross section (0.13 barns), so investigations are also aimed at overcoming this natural obstacle to producing Tc-99m. The neutron activated Mo-98 becomes Mo-99 and subsequently decays into radioactive Tc-99m. The MCNP code is being used to examine the interactions between the particles in each of these situations, thus determining a theoretical threshold to maximize the reaction's efficiency. The simulation results will be applied to ongoing experiments at PPPL, where the empirical data will be compared to predictions from the MCNP code.
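The decay step that follows the neutron activation can be sketched with the two-member Bateman equation. The half-lives below are nominal literature values, and the 87.5% branching of Mo-99 decays that populate the metastable state is an approximate figure, not a result from this work:

```python
import math

T_MO99_H = 65.94   # Mo-99 half-life, hours (nominal literature value)
T_TC99M_H = 6.01   # Tc-99m half-life, hours (nominal literature value)

def tc99m_atoms(n_mo99_0, t_h, branch=0.875):
    """Two-member Bateman solution for Tc-99m atoms at time t_h (hours),
    given n_mo99_0 Mo-99 atoms at t = 0 and no initial Tc-99m."""
    lam1 = math.log(2.0) / T_MO99_H
    lam2 = math.log(2.0) / T_TC99M_H
    return branch * n_mo99_0 * lam1 / (lam2 - lam1) * (
        math.exp(-lam1 * t_h) - math.exp(-lam2 * t_h))

# Tc-99m activity peaks roughly a day after a fresh Mo-99 separation.
peak_hour = max(range(1, 72), key=lambda t: tc99m_atoms(1.0e20, t))
print(peak_hour)
```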
Development of Monte Carlo code for coincidence prompt gamma-ray neutron activation analysis
NASA Astrophysics Data System (ADS)
Han, Xiaogang
Prompt Gamma-Ray Neutron Activation Analysis (PGNAA) offers a non-destructive, relatively rapid on-line method for determination of elemental composition of bulk and other samples. However, PGNAA has inherently large backgrounds, primarily due to the presence of the neutron excitation source; they also include neutron activation of the detector and prompt gamma rays from the structural materials of PGNAA devices. These large backgrounds limit the sensitivity and accuracy of PGNAA. Since most of the prompt gamma rays from the same element are emitted in coincidence, a possible approach for further improvement is to change the traditional PGNAA measurement technique and introduce the gamma-gamma coincidence technique. It is well known that coincidence techniques can eliminate most of the interference backgrounds and improve the signal-to-noise ratio. A new Monte Carlo code, CEARCPG, has been developed at CEAR to simulate gamma-gamma coincidence spectra in PGNAA experiments. In contrast to the existing Monte Carlo codes CEARPGA I and CEARPGA II, a new algorithm for sampling the prompt gamma rays produced by neutron capture and neutron inelastic scattering has been developed in this work. All the prompt gamma rays are taken into account by this new algorithm. Before this work, the commonly used method was to interpolate the prompt gamma rays from a pre-calculated gamma-ray table. That technique works well for single spectra but limits the capability to simulate coincidence spectra. The new algorithm samples the prompt gamma rays from the nucleus excitation scheme. The primary nuclear data used to sample the prompt gamma rays come from the ENSDF library. Three cases are simulated and the simulated results are benchmarked against experiments. The first case is the prototype for ETI PGNAA application. This case is designed to check the capability of CEARCPG for single spectrum simulation. The second
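The cascade-sampling idea described above can be sketched as a random walk down a level scheme, emitting one gamma ray per transition until the ground state is reached. The level energies and branching fractions below are hypothetical, not ENSDF data:

```python
import random

# Hypothetical level scheme: level energy (MeV) -> [(branching fraction, final level)].
LEVELS = {
    7.63: [(0.6, 0.0), (0.4, 1.95)],
    1.95: [(1.0, 0.0)],
}

def sample_cascade(rng, start=7.63):
    """Walk from the capture state to the ground state, recording gamma energies."""
    gammas, level = [], start
    while level > 0.0:
        r, cum = rng.random(), 0.0
        branches = LEVELS[level]
        nxt = branches[-1][1]  # fall back to the last branch against round-off
        for frac, fin in branches:
            cum += frac
            if r <= cum:
                nxt = fin
                break
        gammas.append(level - nxt)
        level = nxt
    return gammas

cascade = sample_cascade(random.Random(42))
print(round(sum(cascade), 2))  # every cascade carries the full capture energy
```

Summing the sampled gamma energies always recovers the capture-state energy, which is the correlation a coincidence simulation must preserve and a gamma-ray table interpolation loses.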
SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy
Rinaldi, I
2014-06-01
Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code demands accurate and reliable physical models for the transport and the interaction of all components of the mixed radiation field. This contribution presents an overview of the recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general purpose MC code which allows the calculation of particle transport and interactions with matter, covering an extended range of applications. The user can manage the code through a graphic interface (FLAIR) developed using the Python programming language. Results: This contribution will present recent refinements in the description of the ionization processes and comparisons between FLUKA results and experimental data of ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging application to treatment monitoring will be shown. The complex calculation of prompt gamma ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton induced nuclear interactions also provide reliable cross section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed-tomography images in order to build patient geometries automatically and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as the reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications. Parts of this work have been supported by the European
NASA Astrophysics Data System (ADS)
Brogan, John
Understanding the dosimetry for high-energy, heavy ions (HZE), especially within living systems, is complex and requires the use of both experimental and computational methods. Tissue-equivalent proportional counters (TEPCs) have been used experimentally to measure energy deposition in volumes similar in dimension to a mammalian cell. As these experiments begin to include a wider range of ions and energies, considerations of cost, time, and radiation protection become necessary and may limit the extent of these studies. Multiple Monte Carlo computational codes have been created to remediate this problem and serve as a mode of verification for previous experimental methods. One such code, Relativistic-Ion Tracks (RITRACKS), is currently being developed at the NASA Johnson Space Center. RITRACKS was designed to describe patterns of ionizations responsible for DNA damage on the molecular scale (nanometers). This study extends RITRACKS version 3.07 into the microdosimetric scale (microns), and compares computational results to previous experimental TEPC data. Energy deposition measurements for 1000 MeV nucleon-1 Fe ions in a 1 micron spherical target were compared. Different settings within RITRACKS were tested to verify their effects on dose to a target and the resulting energy deposition frequency distribution. The results were then compared to the TEPC data.
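The TEPC comparison rests on standard microdosimetric bookkeeping: the lineal energy is the energy imparted to the site divided by the mean chord length, which for a convex sphere is two-thirds of the diameter (Cauchy's theorem). A minimal sketch, where the 1.8 keV deposit is an illustrative number and not a RITRACKS result:

```python
def lineal_energy_keV_per_um(energy_imparted_keV, diameter_um):
    """Lineal energy y = energy imparted / mean chord length (2d/3 for a sphere)."""
    mean_chord_um = 2.0 * diameter_um / 3.0
    return energy_imparted_keV / mean_chord_um

# An illustrative 1.8 keV deposit in a 1 micron spherical target.
print(f"y = {lineal_energy_keV_per_um(1.8, 1.0):.2f} keV/um")
```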
Carrazana González, J; Cornejo Díaz, N; Jurado Vargas, M
2012-05-01
We studied the applicability of the Monte Carlo code DETEFF for the efficiency calibration of detectors for in situ gamma-ray spectrometry determinations of ground deposition activity levels. For this purpose, the code DETEFF was applied to a case study, and the calculated (137)Cs activity deposition levels at four sites were compared with published values obtained both by soil sampling and by in situ measurements. The (137)Cs ground deposition levels obtained with DETEFF were found to be equivalent to the results of the case study within the uncertainties involved. The code DETEFF could thus be used for the efficiency calibration of in situ gamma-ray spectrometry for the determination of ground deposition activity using the uniform slab model. It has the advantage of requiring far less simulation time than general Monte Carlo codes adapted for efficiency computation, which is essential for in situ gamma-ray spectrometry where the measurement configuration yields low detection efficiency. PMID:22336296
Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.
VALDEZ, GREG D.
2012-11-30
Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Code System for Monte Carlo Simulation of Electron and Photon Transport.
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
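The hard/soft split at the heart of the mixed procedure can be sketched as a simple classification against the pre-selected cutoffs. The cutoff values here are illustrative, not PENELOPE defaults:

```python
THETA_CUT_DEG = 5.0  # illustrative angular-deflection cutoff
W_CUT_KEV = 10.0     # illustrative energy-loss cutoff

def is_hard_event(deflection_deg, energy_loss_keV):
    """Hard events are simulated in detail; soft ones are condensed into a
    multiple-scattering step."""
    return deflection_deg > THETA_CUT_DEG or energy_loss_keV > W_CUT_KEV

print(is_hard_event(12.0, 2.0))  # large deflection -> detailed simulation
print(is_hard_event(1.0, 1.0))   # below both cutoffs -> condensed step
```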
MOCRA: a Monte Carlo code for the simulation of radiative transfer in the atmosphere.
Premuda, Margherita; Palazzi, Elisa; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Giovanelli, Giorgio
2012-03-26
This paper describes the radiative transfer model (RTM) MOCRA (MOnte Carlo Radiance Analysis), developed in the frame of DOAS (Differential Optical Absorption Spectroscopy) to correctly interpret remote sensing measurements of trace gas amounts in the atmosphere through the calculation of the Air Mass Factor. Besides the DOAS-related quantities, the MOCRA code yields: (1) the atmospheric transmittance in the vertical and sun directions, (2) the direct and global irradiance, and (3) the single- and multiple-scattered radiance for a detector with assigned position, line of sight and field of view. Sample calculations of the main radiometric quantities calculated with MOCRA are presented and compared with the output of another RTM (MODTRAN4). A further comparison is presented between the NO2 slant column densities (SCDs) measured with DOAS at Evora (Portugal) and the ones simulated with MOCRA. Both comparisons (MOCRA-MODTRAN4 and MOCRA-observations) gave more than satisfactory results, and overall make MOCRA a versatile tool for atmospheric radiative transfer simulations and interpretation of remote sensing measurements. PMID:22453470
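The DOAS quantity that ties MOCRA to the measurements is the air mass factor, AMF = SCD / VCD. A trivial sketch of the retrieval step, with illustrative NO2 numbers rather than the Evora values:

```python
def vertical_column(scd_molec_cm2, amf):
    """Convert a measured slant column density to a vertical column density."""
    return scd_molec_cm2 / amf

scd = 8.0e15  # NO2 slant column, molecules/cm^2 (illustrative)
amf = 2.5     # air mass factor from the radiative-transfer simulation (illustrative)
print(f"VCD = {vertical_column(scd, amf):.2e} molecules/cm^2")
```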
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Lone, M. A.; Wong, P. Y.; Costen, Robert C.
1994-01-01
A baryon transport code (BRYNTRN) has previously been verified using available Monte Carlo results for a solar-flare spectrum as the reference. Excellent results were obtained, but the comparisons were limited to the available data on dose and dose equivalent for moderate penetration studies that involve minor contributions from secondary neutrons. To further verify the code, the secondary energy spectra of protons and neutrons are calculated using BRYNTRN and LAHET (Los Alamos High-Energy Transport code, which is a Monte Carlo code). These calculations are compared for three locations within a water slab exposed to the February 1956 solar-proton spectrum. Reasonable agreement was obtained when various considerations related to the calculational techniques and their limitations were taken into account. Although the Monte Carlo results are preliminary, it appears that the neutron albedo, which is not currently treated in BRYNTRN, might be a cause for the large discrepancy seen at small penetration depths. It also appears that the nonelastic neutron production cross sections in BRYNTRN may underestimate the number of neutrons produced in proton collisions with energies below 200 MeV. The notion that the poor energy resolution in BRYNTRN may cause a large truncation error in neutron elastic scattering requires further study.
Kramer, R; Vieira, J W; Lima, F R A; Fuelle, D
2002-07-01
Organ or tissue equivalent dose, the most important quantity in radiation protection, cannot be measured directly. Therefore it became common practice to calculate the quantity of interest with Monte Carlo methods applied to so-called human phantoms, which are virtual representations of the human body. The Monte Carlo computer code determines conversion coefficients, which are ratios between organ or tissue equivalent dose and measurable quantities. Conversion coefficients have been published by the ICRP (Report No. 74) for various types of radiation, energies and fields, which have been calculated, among others, with the mathematical phantoms ADAM and EVA. Since then progress of image processing, and of clock speed and memory capacity of computers made it possible to create so-called voxel phantoms, which are a far more realistic representation of the human body. Voxel (Volume pixel) phantoms are built from segmented CT and/or MRI images of real persons. A complete set of such images can be joined to a 3-dimensional representation of the human body, which can be linked to a Monte Carlo code allowing for particle transport calculations. A modified version of the VOX_TISS8 human voxel phantom (Yale University) has been connected to the EGS4 Monte Carlo code. The paper explains the modifications, which have been made, the method of coupling the voxel phantom with the code, and presents results as conversion coefficients between organ equivalent dose and kerma in air for external photon radiation. A comparison of the results with published data shows good agreement. PMID:12146699
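The conversion coefficient itself is a ratio of two tallied quantities: the organ equivalent dose per source particle and the air kerma per source particle at the reference point. A minimal sketch with illustrative numbers, not the paper's results:

```python
def conversion_coefficient_Sv_per_Gy(organ_dose_per_particle_Sv,
                                     air_kerma_per_particle_Gy):
    """Organ equivalent dose per unit air kerma, both tallied per source particle."""
    return organ_dose_per_particle_Sv / air_kerma_per_particle_Gy

# Illustrative per-particle tallies from an external-photon run.
print(round(conversion_coefficient_Sv_per_Gy(1.1e-15, 1.0e-15), 2))
```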
Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers
BARTEL, TIMOTHY J.; PLIMPTON, STEVEN J.; GALLIS, MICHAIL A.
2001-10-01
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird[11.1] and models from free-molecular to continuum flowfields in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site density, energy dependent, coverage model is included. Electrons are modeled by either a local charge neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. All of the software packages are written in standard Fortran.
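The gas-phase chemistry step mentioned above derives reaction probabilities from Arrhenius rates. A sketch of evaluating a modified Arrhenius rate, with illustrative coefficients rather than a real reaction set:

```python
import math

KB_J_PER_K = 1.380649e-23  # Boltzmann constant

def arrhenius_rate(T_K, A=1.0e-16, b=0.0, Ea_J=2.0e-19):
    """Modified Arrhenius form k(T) = A * T**b * exp(-Ea / (kB * T)).
    A, b, and Ea here are illustrative placeholders."""
    return A * T_K**b * math.exp(-Ea_J / (KB_J_PER_K * T_K))

# For an activated reaction the rate rises steeply with temperature.
print(arrhenius_rate(2000.0) > arrhenius_rate(1000.0))
```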
NASA Astrophysics Data System (ADS)
SU, J.; Sagdeev, R.; Usikov, D.; Chin, G.; Boyer, L.; Livengood, T. A.; McClanahan, T. P.; Murray, J.; Starr, R. D.
2013-12-01
Introduction: The leakage flux of lunar neutrons produced by precipitation of galactic cosmic ray (GCR) particles in the upper layer of the lunar regolith and measured by orbital instruments such as the Lunar Exploration Neutron Detector (LEND) is investigated by Monte Carlo simulation. Previous Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the elemental composition of lunar soil [1-6] and its effect on the leakage neutron flux. We investigate effects on the emergent flux that depend on the physical distribution of hydrogen within the regolith. We use the software package GEANT4 [7] to calculate neutron production from spallation by GCR particles [8,9] in the lunar soil. Multiple layers of differing hydrogen/water content at different depths in the lunar regolith model are introduced to examine enhancement or suppression of leakage neutron flux. We find that the majority of leakage thermal and epithermal neutrons are produced at depths of 25 cm to 75 cm below the lunar surface. Neutrons produced in the shallow top layer retain more of their original energy due to fewer scattering interactions and escape from the lunar surface mostly as fast neutrons. This provides a diagnostic tool in interpreting leakage neutron flux enhancement or suppression due to the hydrogen concentration distribution in lunar regolith. We also find that the emission angular distribution of thermal and epithermal leakage neutrons can be described by cos^(3/2)(theta), while the emission angular distribution of fast neutrons is cos(theta). The energy sensitivity and angular response of the LEND detectors SETN and CSETN are investigated using the leakage neutron spectrum from GEANT4 simulations. A simplified LRO model is used to benchmark MCNPX [10] and GEANT4 on CSETN absolute count rate corresponding to neutron flux from bombardment of 120 MV solar potential GCR particles on FAN lunar soil. We are able to interpret the count rates of SETN and
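The quoted angular laws can be sampled by inverting the cumulative distribution: for an intensity proportional to cos^n(theta) over the outward hemisphere, the pdf in theta is proportional to cos^n(theta)sin(theta), so cos(theta) = u^(1/(n+1)) for uniform u. This sketch checks that the cos^(3/2) law is more forward-peaked than the Lambertian cos(theta) law quoted for fast neutrons:

```python
import math
import random

def sample_emission_angle(n, rng):
    """Sample theta for intensity ~ cos^n(theta): invert CDF 1 - cos^(n+1)(theta)."""
    return math.acos(rng.random() ** (1.0 / (n + 1.0)))

rng = random.Random(1)
thermal = [sample_emission_angle(1.5, rng) for _ in range(50000)]  # cos^(3/2) law
fast = [sample_emission_angle(1.0, rng) for _ in range(50000)]     # Lambertian law
# A more forward-peaked distribution has a smaller mean emission angle.
print(sum(thermal) / len(thermal) < sum(fast) / len(fast))
```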
A Two-Dimensional Monte Carlo Code System for Linear Neutron Transport Calculations.
1980-04-24
Version 00 KIM (k-infinite-Monte Carlo) solves the steady-state linear neutron transport equation for a fixed source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional infinite thermal reactor lattice using the Monte Carlo method. In addition to the combinatorial description of domains, the program allows complex configurations to be represented by a discrete set of points whereby the calculation speed is greatly improved. Configurations are described as the result of overlays of elementary figures over a basic domain.
Somasundaram, E.; Palmer, T. S.
2013-07-01
In this paper, we present the work done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
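The CADIS source-biasing construction can be sketched per source cell: the biased source is proportional to the true source times the adjoint flux, and birth weights are set so that weight times biased probability recovers the true probability. The cell values below are illustrative:

```python
def cadis_bias(q, phi_adj):
    """Given source strengths q and adjoint fluxes phi_adj per cell, return the
    biased source pdf and birth weights (consistency: w_i * qb_i = q_i / Q)."""
    Q = sum(q)
    R = sum(qi * pi for qi, pi in zip(q, phi_adj))  # adjoint-weighted source
    q_biased = [qi * pi / R for qi, pi in zip(q, phi_adj)]
    weights = [R / (Q * pi) for pi in phi_adj]
    return q_biased, weights

# Illustrative three-cell problem; cell 3 is "important" (high adjoint flux).
q_biased, weights = cadis_bias([1.0, 2.0, 1.0], [0.1, 0.5, 2.0])
print(round(sum(q_biased), 12))  # biased source is a normalized pdf
```

More particles are born in important cells but with proportionally lower weight, so the tally stays unbiased while its variance drops.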
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
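The coupling (folding) step can be sketched as a group-wise sum of forward fluence times adjoint dose importance on the coupling surface. The group values below are illustrative, not MASH output:

```python
def fold(fluence, importance):
    """Detector response = sum over groups of fluence * dose importance."""
    return sum(f, ) if False else sum(f * i for f, i in zip(fluence, importance))

fluence = [2.0e8, 5.0e7, 1.0e7]        # n/cm^2 per group (illustrative)
importance = [1.0e-9, 4.0e-9, 9.0e-9]  # response per unit fluence (illustrative)
print(f"dose response = {fold(fluence, importance):.3e}")
```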
Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A
NASA Astrophysics Data System (ADS)
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels
2015-03-01
The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and that give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi-Teller Z-law, even though experimental data indicate that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.
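The Fermi-Teller Z-law at issue above can be sketched directly: for a compound, the probability that a stopped antiproton is captured by a given constituent is taken proportional to that constituent's atomic number times its stoichiometric abundance. Water is used as the example:

```python
def z_law_capture(constituents):
    """Fermi-Teller Z-law for a compound.
    constituents: list of (Z, stoichiometric count) pairs.
    Returns the capture probability for each constituent."""
    weights = [z * n for z, n in constituents]
    total = sum(weights)
    return [w / total for w in weights]

# H2O: two hydrogens (Z=1) and one oxygen (Z=8).
p_H, p_O = z_law_capture([(1, 2), (8, 1)])
print(f"P(H) = {p_H:.2f}, P(O) = {p_O:.2f}")
```

The experimental inadequacy discussed in the abstract is precisely that measured capture fractions on compounds deviate from this simple Z-weighted rule.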
Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also make it possible to achieve better consistency between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, structural changes to the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import into ROOT of the FLUKA output files for analysis and to deploy a user-friendly GUI input interface.
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated particles and their subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC package BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and obtain subsequent dose rates, and the upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code
NASA Astrophysics Data System (ADS)
Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat
2007-01-01
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 µm; the sensitive volume is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can
NASA Astrophysics Data System (ADS)
Gugiu, E. D.; Ellis, R. J.; Dumitrache, I.; Constantin, M.
2005-05-01
The major aim of this work is a sensitivity analysis related to the influence of the different nuclear data libraries on the k-infinity values and on the void coefficient estimations performed for various CANDU fuel projects, and on the simulations related to the replacement of the original stainless steel adjuster rods by cobalt assemblies in the CANDU reactor core. The computations are performed using the Monte Carlo transport codes MCNP5 and MONTEBURNS 1.0 for the actual, detailed geometry and material composition of the fuel bundles and reactivity devices. Some comparisons with deterministic and probabilistic codes involving the WIMS library are also presented.
Qin, Z.; Shoesmith, D.W.
2007-07-01
Based on a probabilistic model previously proposed, a Monte Carlo simulation code (EBSPA) has been developed to predict the lifetime of the engineered barriers system within the Yucca Mountain nuclear waste repository. The degradation modes considered in the EBSPA are general passive corrosion and hydrogen-induced cracking for the drip shield; and general passive corrosion, crevice corrosion and stress corrosion cracking for the waste package. Two scenarios have been simulated using the EBSPA code: (a) a conservative scenario for the conditions thought likely to prevail in the repository, and (b) an aggressive scenario in which the impact of the degradation processes is overstated. (authors)
DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries
NASA Astrophysics Data System (ADS)
Kargaran, H.; Minuchehr, A.; Zolfaghari, A.
2016-07-01
In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in FORTRAN90 with the capability of parallel processing using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries created with first- and second-order surfaces. It performs independent particle tracking through the complex geometry without the need for a mesh. In addition, it resolves the computational domain boundary and computes volumes in border grids using a hexahedral mesh. The code is robust and self-contained, requiring no separate tools such as mesh generators. The results of six test cases are presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems. The results are found to be in good agreement with reference and experimental data.
Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code
Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza
2013-01-01
The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator for 6 MV and 18 MV beams was performed. The results of the simulation were validated by measurements in water with an ionization chamber and with extended dose range (EDR2) film in solid water. The linac's X-ray characteristics are highly sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc; for comparison with film measurements, the 3ddose file produced by DOSXYZnrc was analyzed using a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region. In the build-up region, the difference was 1% at 6 MV and 2% at 18 MV. The mean difference between measurements and Monte Carlo simulation is very small for both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique. PMID:24672765
Wu, Yunzhao; Tang, Zesheng
2014-01-01
In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by the environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating the large-scale effects such as the reflection of topography of the lunar soil and Hapke's model for calculating the reflection intensity of the internal scattering effects of particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography to the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892
Prettyman, T.H.; Gardner, R.P.; Verghese, K. (Center for Engineering Applications and Radioisotopes)
1993-08-01
A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The code was developed so that the Monte Carlo neophyte can easily use it. A minimum amount of input preparation is required and specified fixed values of the parameters used to control the code operation can be used. The weight windows technique, employing splitting and Russian Roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given here for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity (PNP) tool and found to be very accurate. Results of the experimental validation and details of code performance are presented.
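The weight-windows technique mentioned above, splitting plus Russian roulette, is generic to Monte Carlo transport and can be sketched compactly. The Python fragment below is an illustrative toy, not McENL code; the function name and the window bounds are hypothetical:

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survive):
    """Apply a weight window to one particle.

    Returns the list of particle weights that continue the history:
    heavy particles are split into lighter copies, light particles play
    Russian roulette (survive with probability weight/w_survive or die).
    """
    if weight > w_high:
        # Splitting: replace one heavy particle by n copies of equal weight.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: kill most light particles, boost the survivors.
        if random.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]  # inside the window: unchanged
```

Splitting conserves the total weight exactly, while roulette conserves it only in expectation; that expectation-preserving property is what keeps the tallies unbiased while the particle population is controlled.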
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S.
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions, together with a published ¹²⁵I spectrum, were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than those of MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
VESTA 2.1.5 - Monte Carlo Depletion Interface Code; AURORA 1.0.0 - Depletion Analysis Tool.
HAECK, WIM
2013-03-21
Version 01 RSICC is authorized to distribute VESTA 2.1.5 for research and education purposes only. Requesters from NEA Data Bank member countries are advised to order VESTA 2.1.5 from the NEA Data Bank. Non-commercial and non-profit users from other OECD member countries (specifically Canada and the United States) may order VESTA 2.1.5 from RSICC. Users from non-OECD member countries and all commercial requesters are advised to contact the IRSN. VESTA is a Monte Carlo depletion interface code that is currently under development at IRSN (France). From its inception, VESTA has been intended to be a generic interface code, ultimately capable of using any Monte Carlo code or depletion module and completely tailorable to the user's needs on practically all aspects of the code. For the current version, VESTA allows the use of any version of MCNP(X) as the transport module and ORIGEN 2.2 or the built-in PHOENIX module as the depletion module. A short overview of the main features of this version of the code is given in the abstract.
Liu, T.; Du, X.; Ji, W.; Xu, X. G.
2013-07-01
This paper describes the development of a Graphics Processing Unit (GPU) accelerated Monte Carlo photon transport code, ARCHER-GPU, to perform CT imaging dose calculations with good accuracy and performance. The code simulates interactions of photons with heterogeneous materials. It contains a detailed CT scanner model and a family of patient phantoms. Several techniques are used to optimize the code for the GPU architecture. In the accuracy and performance test, a 142 kg adult male phantom was selected, and the CT scan protocol involved a whole-body axial scan, 20-mm x-ray beam collimation, 120 kVp and a pitch of 1. A total of 9 × 10⁸ photons were simulated and the absorbed doses to 28 radiosensitive organs/tissues were calculated. The average percentage difference of the results obtained by the general-purpose production code MCNPX and ARCHER-GPU was found to be less than 0.38%, indicating an excellent agreement. The total computation time was found to be 8689, 139 and 56 minutes for MCNPX, ARCHER-CPU (6-core) and ARCHER-GPU, respectively, indicating a decent speedup. Under a recent grant funding from the NIH, the project aims at developing a Monte Carlo code with the capability of sub-minute CT organ dose calculations. (authors)
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multidimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.
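A genetic algorithm of the kind FitSKIRT drives through GAlib can be sketched generically. The Python toy below is a stand-in, not FitSKIRT's actual operators: tournament selection, uniform crossover, Gaussian mutation and elitism are assumed choices, and the rates are illustrative:

```python
import random

def genetic_fit(objective, bounds, pop_size=30, generations=60, seed=1):
    """Minimize `objective` over box-bounded real parameters with a
    minimal real-coded genetic algorithm (tournament selection, uniform
    crossover, Gaussian mutation of one gene, one elite survivor)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [objective(ind) for ind in pop]
        elite = pop[scores.index(min(scores))]

        def tournament():
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if scores[i] < scores[j] else pop[j]

        children = [elite[:]]  # elitism: best individual survives unchanged
        while len(children) < pop_size:
            a, b = tournament(), tournament()
            # uniform crossover: each gene taken from either parent
            child = [a[k] if rng.random() < 0.5 else b[k] for k in range(dim)]
            # Gaussian mutation of one randomly chosen gene, clipped to bounds
            k = rng.randrange(dim)
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = children
    return min(pop, key=objective)
```

For a noisy Monte Carlo objective like SKIRT's output, a population-based search of this kind is attractive because no gradients are needed and the population averages over the simulation noise.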
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
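The features listed, particle creation, tracking, tallying and destruction, form the canonical Monte Carlo history loop. A minimal single-process sketch in Python (not the MCB source; the 1-D slab geometry, cross section and absorption probability are illustrative assumptions):

```python
import random

def run_history(rng, slab_width, sigma_t, absorb_prob):
    """Track one particle through a 1-D slab: sample flight distances
    from an exponential with total cross section sigma_t, then absorb
    or scatter at each collision. Returns the particle's fate."""
    x, direction = 0.0, 1.0  # created at the left face, moving right
    while True:
        x += direction * rng.expovariate(sigma_t)
        if x >= slab_width:
            return 'transmit'
        if x <= 0.0:
            return 'reflect'
        if rng.random() < absorb_prob:
            return 'absorb'          # particle destroyed
        direction = rng.choice((-1.0, 1.0))  # isotropic scatter in 1-D

def tally(n, slab_width=2.0, sigma_t=1.0, absorb_prob=0.5, seed=7):
    """Run n independent histories and tally the outcomes."""
    rng = random.Random(seed)
    counts = {'transmit': 0, 'reflect': 0, 'absorb': 0}
    for _ in range(n):
        counts[run_history(rng, slab_width, sigma_t, absorb_prob)] += 1
    return counts
```

In a parallel benchmark like MCB, each rank would run a share of the histories and the tallies (and migrating particles) would be combined with MPI reductions and point-to-point sends.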
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16⁴ lattice on a 2-pipe system may be phrased in terms of the link update time or the overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.
Probability of initiation and extinction in the Mercury Monte Carlo code
McKinley, M. S.; Brantley, P. S.
2013-07-01
A Monte Carlo method for computing the probability of initiation has previously been implemented in Mercury. Recently, a new method based on the probability of extinction has been implemented as well. The two methods share features, from counting progeny to cycling in time, but they also differ, for example in population control and statistical-uncertainty reporting. The two methods agree very well for several test problems. Since each method has advantages and disadvantages, we currently recommend that both methods be used to compute the probability of criticality. (authors)
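The probability-of-initiation calculation can be illustrated with a generic branching-process sketch. This is not Mercury's algorithm; the fission probability, fixed multiplicity and population threshold below are illustrative assumptions:

```python
import random

def chain_survives(rng, p_fission, nu, max_pop=1000):
    """Follow one neutron chain: each neutron induces fission with
    probability p_fission, producing nu new neutrons; otherwise it is
    lost. The chain is deemed to initiate if the population ever
    reaches max_pop (a supercritical chain that gets this large
    essentially never dies out)."""
    population = 1
    while 0 < population < max_pop:
        next_pop = 0
        for _ in range(population):
            if rng.random() < p_fission:
                next_pop += nu
        population = next_pop
    return population >= max_pop

def probability_of_initiation(p_fission, nu=2, trials=2000, seed=3):
    """Estimate P(initiation) = 1 - P(extinction) by direct sampling."""
    rng = random.Random(seed)
    hits = sum(chain_survives(rng, p_fission, nu) for _ in range(trials))
    return hits / trials
```

For p_fission = 0.7 and nu = 2, the analytic extinction probability solves q = (1 - p) + p·q², giving q ≈ 0.43 and an initiation probability of about 0.57, which the sampled estimate reproduces; for a subcritical chain (mean multiplication below 1) the initiation probability is zero.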
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated and measured dose. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
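The commissioning idea, adjusting PSL weights so the weighted sum of pre-computed PSL doses matches measurement, is at heart a nonnegative least-squares problem. The sketch below uses projected gradient descent rather than the paper's augmented Lagrangian method and omits the symmetry/smoothness regularization; the function name and step size are assumptions:

```python
def fit_psl_weights(psl_doses, measured, iters=20000, lr=0.01):
    """Fit nonnegative PSL weights w so that the weighted sum of
    per-PSL dose distributions matches a measured dose:
    minimize 0.5*||A w - b||^2 subject to w >= 0, where A[i][j] is the
    dose of PSL j in voxel i, by projected gradient descent."""
    n_vox, n_psl = len(psl_doses), len(psl_doses[0])
    w = [1.0] * n_psl
    for _ in range(iters):
        # residual r = A w - b, one entry per dose voxel
        r = [sum(psl_doses[i][j] * w[j] for j in range(n_psl)) - measured[i]
             for i in range(n_vox)]
        # gradient of the objective is A^T r; step, then project onto w >= 0
        g = [sum(psl_doses[i][j] * r[i] for i in range(n_vox))
             for j in range(n_psl)]
        w = [max(0.0, w[j] - lr * g[j]) for j in range(n_psl)]
    return w
```

Once the weights are fitted against water-phantom measurements, any subsequent dose calculation simply carries each sampled particle with the weight of its parent PSL.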
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap.
New version program summary:
Program title: COOL
Catalogue identifier: AEHJ_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1 097 733
No. of bytes in distributed program, including test data, etc.: 18 425 722
Distribution format: tar.gz
Programming language: C++
Computer: Desktop
Operating system: Linux
RAM: 500 Mbytes
Classification: 16.7, 23
Catalogue identifier of previous version: AEHJ_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388
Does the new version supersede the previous version?: Yes
Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap.
Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability.
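The acceptance/rejection collision mechanism described under "Solution method" is the standard DSMC step. A 1-D toy sketch in Python (not the COOL code, which is C++; the candidate-pair count, the relative-velocity ceiling, and the velocity-swap collision rule are illustrative assumptions):

```python
import random

def dsmc_collision_step(velocities, n_candidates, vr_max, rng):
    """One DSMC collision step in a single cell: pick random candidate
    pairs and accept each with probability v_rel / vr_max (constant
    hard-sphere cross section, so sigma cancels from the ratio).
    Accepted equal-mass pairs collide elastically; in this 1-D toy an
    elastic equal-mass collision simply swaps the two velocities."""
    accepted = 0
    for _ in range(n_candidates):
        i, j = rng.sample(range(len(velocities)), 2)
        v_rel = abs(velocities[i] - velocities[j])
        if rng.random() < v_rel / vr_max:   # acceptance/rejection test
            velocities[i], velocities[j] = velocities[j], velocities[i]
            accepted += 1
    return accepted
```

The key DSMC property is visible in the sketch: fast pairs collide more often (acceptance proportional to relative speed), while momentum and kinetic energy are conserved exactly by each accepted collision.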
Basic physical and chemical information needed for development of Monte Carlo codes
Inokuti, M.
1993-08-01
It is important to view track structure analysis as an application of a branch of theoretical physics (i.e., statistical physics and physical kinetics in the language of the Landau school). Monte Carlo methods and transport equation methods represent two major approaches. In either approach, it is of paramount importance to use as input the cross section data that best represent the elementary microscopic processes. Transport analysis based on unrealistic input data must be viewed with caution, because results can be misleading. Work toward establishing the cross section data, which demands a wide scope of knowledge and expertise, is being carried out through extensive international collaborations. In track structure analysis for radiation biology, the need for cross sections for the interactions of electrons with DNA and neighboring protein molecules seems to be especially urgent. Finally, it is important to interpret results of Monte Carlo calculations fully and adequately. To this end, workers should document input data as thoroughly as possible and report their results in detail in many ways. Workers in analytic transport theory are then likely to contribute to the interpretation of the results.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder, equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, and would thus lie outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important to address dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and low energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions have been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code
NASA Astrophysics Data System (ADS)
Askri, Boubaker
2015-10-01
The Geant4 Monte Carlo code has been used to design and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model taken from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low-energy bremsstrahlung photons with beryllium. A benchmark test showed good agreement between the emitted neutron flux spectra predicted by the Geant4 and FLUKA codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two-stage Monte Carlo simulation. In the first stage, the distributions of the seven phase-space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage, events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10^10 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10^9 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.
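The two-stage approach described above can be sketched in miniature: stage one tallies a boundary phase-space distribution (here collapsed to a single energy histogram with a toy spectrum; the real calculation records seven correlated coordinates), and stage two samples new source particles from it. The spectrum shape and all numbers are illustrative assumptions, not the paper's data.

```python
import bisect
import random

def stage_one(n_photons, seed=1):
    """Stage 1: tally the energies of photons reaching the target boundary
    into a histogram (a stand-in for the stored phase-space distributions).
    The exponential-like spectrum is a toy assumption."""
    rng = random.Random(seed)
    hist = [0] * 10                                  # 10 bins over [0, 10) MeV
    for _ in range(n_photons):
        e = min(rng.expovariate(1.0 / 3.0), 9.999)   # toy bremsstrahlung-like tail
        hist[int(e)] += 1
    return hist

def stage_two_sample(hist, rng):
    """Stage 2: draw a source energy from the stored distribution
    (bin chosen proportionally to its count, uniform within the bin)."""
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    u = rng.uniform(0, total)
    b = bisect.bisect_left(cdf, u)
    return b + rng.random()

rng = random.Random(2)
hist = stage_one(100_000)
samples = [stage_two_sample(hist, rng) for _ in range(10_000)]
```

Decoupling the two stages in this way means the (expensive) electron-on-tungsten simulation need not be repeated when only the downstream target changes.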
Characterisation of the TRIUMF neutron facility using a Monte Carlo simulation code.
Monk, S D; Abram, T; Joyce, M J
2015-04-01
Here, the characterisation of the high-energy neutron field at TRIUMF (TRI-University Meson Facility, Vancouver, British Columbia) with Monte Carlo simulation software is described. The package used is MCNPX version 2.6.0, with the neutron fluence rate determined at three locations within the TRIUMF Thermal Neutron Facility (TNF), including the exit of the neutron channel, where users of the facility can test devices that may be susceptible to the effects of this form of radiation. The facility is often used to approximately emulate the field likely to be encountered at high altitudes due to radiation of galactic origin, and thus the simulated information is compared with the energy spectrum calculated to be due to neutron radiation of cosmic origin at typical aircraft altitudes. The calculated values were also compared with neutron flux measurements that were estimated using the activation of various foils by the staff of the facility, showing agreement within an order of magnitude. PMID:25342608
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
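The decomposition idea can be sketched in a toy form, with Python dicts standing in for MPI remote-memory-access windows: each rank owns a contiguous block of tally cells, and a history that scores in a non-local cell accumulates into the owner's storage. The class, the block decomposition, and the cell numbering are hypothetical illustrations, not the paper's implementation.

```python
from collections import defaultdict

class DomainTallies:
    """Toy stand-in for RMA tally accumulation: each rank owns a contiguous
    block of tally cells; scoring into a non-local cell 'remotely' accumulates
    on the owning rank (a dict here, an MPI one-sided accumulate in reality)."""

    def __init__(self, n_ranks, cells_per_rank):
        self.cells_per_rank = cells_per_rank
        self.tallies = [defaultdict(float) for _ in range(n_ranks)]

    def owner(self, cell):
        # simple block decomposition of the global cell index space
        return cell // self.cells_per_rank

    def score(self, cell, value):
        self.tallies[self.owner(cell)][cell] += value

t = DomainTallies(n_ranks=4, cells_per_rank=10)
t.score(3, 1.0)     # cell 3 is local to rank 0
t.score(25, 0.5)    # cell 25 is accumulated onto its owner, rank 2
t.score(25, 0.25)   # repeated scores add up
```

The appeal of the one-sided formulation is that the owning rank need not participate in every accumulation, which is exactly where the cited implementation inefficiencies become the bottleneck.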
Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code
Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan; Leal, Luiz C.
2015-11-06
In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
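The temperature treatment can be illustrated with the simplest possible stand-in: linear interpolation of a cross-section set between two pre-broadened temperatures on a shared energy grid. This is a simplified sketch only (the paper's actual scheme uses finite-difference Doppler broadening plus unit-base interpolation of the thermal distributions), and the grid and values below are toy data.

```python
def interp_xs(xs_lo, xs_hi, t_lo, t_hi, t):
    """Linearly interpolate a cross-section set between two pre-broadened
    temperatures defined on the same energy grid (a simplified stand-in
    for the temperature treatment described above)."""
    f = (t - t_lo) / (t_hi - t_lo)
    return [(1.0 - f) * lo + f * hi for lo, hi in zip(xs_lo, xs_hi)]

# toy 3-point capture cross section (barns) at 300 K and 600 K
xs_450 = interp_xs([10.0, 4.0, 1.0], [12.0, 6.0, 2.0], 300.0, 600.0, 450.0)
```

Generating data at the problem temperature up front, rather than interpolating on the fly during transport, is the design choice the abstract emphasizes.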
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation is running.
Khajeh, Masoud; Safigholi, Habib
2016-03-01
A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer layer thicknesses versus energy such that minimum X-ray attenuation occurred. A second optimization addressed the selection of the anode shape, based on the in-water Monte Carlo TG-43U1 anisotropy function. This optimization was carried out to bring the dose anisotropy function closer to unity at any angle from 0° to 170°. Three anode shapes, cylindrical, spherical, and conical, were considered. Moreover, with a Computational Fluid Dynamics (CFD) code, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated. The CFD characterization criteria were the minimum temperatures of the anode and cooling water and the pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563
Code System to Perform Monte Carlo Simulation of Electron Gamma-Ray Showers in Arbitrary Materials.
2002-10-15
Version 00 PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials. Initially, it was devised to simulate the PENetration and Energy LOss of Positrons and Electrons in matter; photons were introduced later. The adopted scattering model gives a reliable description of radiation transport in the energy range from a few hundred eV to about 1 GeV. PENELOPE generates random electron-photon showers in complex material structures consisting of any number of distinct homogeneous regions (bodies) with different compositions. The Penelope Forum list archives and other information can be accessed at http://www.nea.fr/lists/penelope.html. PENELOPE-MPI extends the capabilities of PENELOPE-2001 (RSICC C00682MNYCP02; NEA-1525/05) by providing for the use of MPI-type parallel drivers and extends the original version's ability to read different types of input data sets, such as voxels. The motivation is to increase the efficiency of Monte Carlo simulations for medical applications. The physics of the calculations has not been changed, and the original description of PENELOPE-2001 (which follows) is still valid. PENELOPE-2001 contains substantial changes and improvements over the previous versions 1996 and 2000. As for the physics, the model for electron/positron elastic scattering has been revised. Bremsstrahlung emission is now simulated using partial-wave data instead of analytical approximate formulae. Photoelectric absorption in K and L shells is described from the corresponding partial cross sections. Fluorescence radiation from vacancies in K and L shells is followed. Refinements were also introduced in electron/positron transport mechanics, mostly to account for the energy dependence of the mean free paths for hard events. Simulation routines were re-programmed in a more structured way, and new example MAIN programs were written with more flexible input and expanded output.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
NASA Astrophysics Data System (ADS)
Ghoos, K.; Dekeyser, W.; Samaey, G.; Börner, P.; Baelmans, M.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
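The Random Noise coupling with averaging can be sketched on a scalar fixed-point problem: each outer iteration plugs a fresh noisy Monte Carlo estimate into the update, and post-burn-in iterates are averaged to suppress the statistical error. The model x = 0.5x + 1 and all parameters below are illustrative assumptions, not the coupled FV/MC system of the paper.

```python
import random

def mc_source(x, rng, n_particles):
    """Noisy Monte Carlo stand-in for the source term: the exact response
    0.5*x + 1 plus O(1/sqrt(N)) statistical noise."""
    return 0.5 * x + 1.0 + rng.gauss(0.0, 1.0 / n_particles ** 0.5)

def random_noise_coupling(n_iter=2000, n_particles=100, burn_in=50, seed=0):
    """Each outer iteration re-runs the 'MC code' with fresh, uncorrelated
    noise and feeds the result back in; iterates after burn-in are averaged."""
    rng = random.Random(seed)
    x, acc, kept = 0.0, 0.0, 0
    for k in range(n_iter):
        x = mc_source(x, rng, n_particles)   # x_{k+1} = S(x_k) + noise
        if k >= burn_in:
            acc += x
            kept += 1
    return acc / kept

estimate = random_noise_coupling()           # fixed point of x = 0.5x + 1 is 2
```

Averaging over iterates is what lets a cheap, noisy MC run per iteration still yield an accurate coupled solution, which is the source of the reported speedup.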
Walsh, J. A.; Palmer, T. S.; Urbatsch, T. J.
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
Present status of vectorized Monte Carlo
Brown, F.B.
1987-01-01
Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.
PUVA: A Monte Carlo code for intra-articular PUVA treatment of arthritis
Descalle, M.A.; Laing, T.J.; Martin, W.R.
1996-12-31
Current rheumatoid arthritis treatments are only partially successful. Intra-articular psoralen-ultraviolet light (PUVA) phototherapy appears to be a new and valid alternative. Ultraviolet laser light (UVA) delivered into the knee joint through a fiber optic is used in combination with 8-methoxypsoralen (8-MOP), a light-sensitive chemical administered orally. A few hours after ingestion, the psoralen has diffused into all body cells. Once activated by UVA light, it binds to biological molecules, inhibiting cell division and ultimately causing local control of the arthritis. The magnitude of the response is proportional to the number of photoproducts delivered to tissues (i.e., the number of absorbed photons): the PUVA treatment will only be effective if a sufficient and relatively uniform dose is delivered to the diseased synovial tissues, while sparing other tissues such as cartilage. An application is being developed, based on analog Monte Carlo methods, to predict photon densities in tissues and the minimum number of intra-articular catheter positions necessary to ensure proper treatment of the diseased zone. Other interesting aspects of the problem deal with the complexity of the joint geometry, the physics of light scattering in tissues (a relatively new field of research that is not fully understood because of the variety of tissues and tissue components), and, finally, the need to include the laws of optics (reflection and refraction) at interfaces.
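The analog approach can be sketched for a 1-D slab: photons take exponentially distributed steps, are absorbed with probability μ_a/μ_t at each collision, and otherwise rescatter; absorption depths are tallied. The coefficients and geometry are illustrative, and the tissue-realistic features the abstract mentions (anisotropic scattering, reflection and refraction at interfaces) are deliberately omitted.

```python
import random

def absorbed_depth_profile(n, mu_a=1.0, mu_s=9.0, slab=2.0, nbins=10, seed=7):
    """Analog MC sketch (not the PUVA code itself): photons enter a 1-D slab,
    take exponential steps of mean 1/mu_t, are absorbed with probability
    mu_a/mu_t per collision, and otherwise scatter isotropically;
    absorption depths are tallied into a histogram."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    hist = [0] * nbins
    for _ in range(n):
        z, mu = 0.0, 1.0                        # start at surface, heading inward
        while True:
            z += mu * rng.expovariate(mu_t)     # distance to next collision
            if z < 0.0 or z >= slab:            # escaped through either face
                break
            if rng.random() < mu_a / mu_t:      # absorbed: score the depth
                hist[int(z / slab * nbins)] += 1
                break
            mu = rng.uniform(-1.0, 1.0)         # isotropic rescatter
    return hist

profile = absorbed_depth_profile(50_000)
```

In an analog (unweighted) scheme like this, each tally is a physical event count, which is why photon densities map directly onto photoproduct yields in the treatment-planning application.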
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With over a million cells, over a hundred million tallies and over ten billion particle histories, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the basic memory of the model exceeds the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model using domain decomposition and nested parallel computation. The k_eff and the flux of each cell are obtained. (authors)
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Oehlke, Elisabeth; Mostacci, Domiziano; Schaffer, Paul; Trinczek, Michael; Hoehr, Cornelia
2016-01-01
The Monte Carlo code FLUKA is used to simulate the production of a number of positron emitting radionuclides, 18F, 13N, 94Tc, 44Sc, 68Ga, 86Y, 89Zr, 52Mn, 61Cu and 55Co, on a small medical cyclotron with a proton beam energy of 13 MeV. Experimental data collected at the TR13 cyclotron at TRIUMF agree within a factor of 0.6 ± 0.4 with the directly simulated data, except for the production of 55Co, where the simulation underestimates the experiment by a factor of 3.4 ± 0.4. The experimental data also agree within a factor of 0.8 ± 0.6 with the convolution of simulated proton fluence and cross sections from literature. Overall, this confirms the applicability of FLUKA to simulate radionuclide production at 13 MeV proton beam energy.
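The "convolution of simulated proton fluence and cross sections" mentioned above amounts to folding the spectrum with σ(E): R = n ∫ Φ(E) σ(E) dE. Here is a minimal trapezoidal-rule sketch with made-up numbers (a flat spectrum and a constant 100 mb cross section, chosen so the answer is analytic); the units and values are toy assumptions, not the TR13 data.

```python
def production_rate(energy_mev, fluence, sigma_mb, n_atoms_cm3):
    """Fold a (hypothetical) proton fluence spectrum with a reaction cross
    section by the trapezoidal rule: R = n * integral Phi(E)*sigma(E) dE.
    sigma is given in millibarn (1 mb = 1e-27 cm^2); toy units throughout."""
    mb_to_cm2 = 1e-27
    total = 0.0
    for i in range(len(energy_mev) - 1):
        de = energy_mev[i + 1] - energy_mev[i]
        avg = 0.5 * (fluence[i] * sigma_mb[i] + fluence[i + 1] * sigma_mb[i + 1])
        total += avg * de
    return n_atoms_cm3 * total * mb_to_cm2

# flat toy spectrum over 5-13 MeV, constant 100 mb cross section:
# integral = 1 * 100 * 8 = 800 mb*MeV, so R = 1e22 * 800e-27 = 8e-3
rate = production_rate([5.0, 9.0, 13.0], [1.0, 1.0, 1.0],
                       [100.0, 100.0, 100.0], 1e22)
```

Comparing this folded estimate against the directly simulated production is exactly the cross-check the abstract reports (agreement within a factor of 0.8 ± 0.6).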
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within +/-1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE.
Charles A. Wemple; Joshua J. Cogliati
2005-04-01
A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
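The four-tap shift-register-sequence generator mentioned above can be sketched as a generalized feedback shift register, x[n] = x[n−a] ⊕ x[n−b] ⊕ x[n−c] ⊕ x[n−d]. The lags below are Ziff's well-known four-tap set; the paper does not specify which taps its generator uses, and the seeding via `random.Random` is an arbitrary bootstrap for illustration only.

```python
import random

class FourTapGFSR:
    """Sketch of a four-tap shift-register-sequence generator on 32-bit
    words: x[n] = x[n-a] ^ x[n-b] ^ x[n-c] ^ x[n-d]. Lags are Ziff's
    four-tap set; seeding is an arbitrary choice, not the paper's scheme."""

    LAGS = (471, 1586, 6988, 9689)

    def __init__(self, seed=1):
        boot = random.Random(seed)
        self.state = [boot.getrandbits(32) for _ in range(self.LAGS[-1])]
        self.i = 0

    def next32(self):
        n = len(self.state)
        x = 0
        for lag in self.LAGS:
            x ^= self.state[(self.i - lag) % n]
        self.state[self.i] = x              # overwrite the oldest word
        self.i = (self.i + 1) % n
        return x

    def random(self):
        return self.next32() / 2.0 ** 32    # uniform float in [0, 1)

g = FourTapGFSR(seed=1)
vals = [g.random() for _ in range(2000)]
```

Four-tap rules were popularized because two-tap generators exhibit three-point correlations that can bias Monte Carlo results; XORing four lags suppresses them at negligible extra cost.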
NASA Astrophysics Data System (ADS)
La Tessa, Chiara; Mancusi, Davide; Rinaldi, Adele; di Fino, Luca; Zaconte, Veronica; Larosa, Marianna; Narici, Livio; Gustafsson, Katarina; Sihver, Lembit
ALTEA-Space is the principal in-space experiment of an international and multidisciplinary project called ALTEA (Anomalous Long Term Effects on Astronauts). The measurements were performed on the International Space Station between August 2006 and July 2007 and aimed at characterising the space radiation environment inside the station. The analysis of the collected data provided the abundances of elements with charge 5 ≤ Z ≤ 26 and energy above 100 MeV/nucleon. The same results have been obtained by simulating the experiment with the three-dimensional Monte Carlo code PHITS (Particle and Heavy Ion Transport System). The simulation accurately reproduces the composition of the space radiation environment as well as the geometry of the experimental apparatus; moreover, the presence of several materials that surround the device, e.g. the spacecraft hull and the shielding, has been taken into account. An estimate of the abundances has also been calculated with the help of experimental fragmentation cross sections taken from the literature and predictions of the deterministic codes GNAC, SihverCC and Tripathi97. The comparison between the experimental and simulated data has two important aspects: it validates the codes, suggesting how they may be benchmarked, and it helps to interpret the measurements and therefore better understand the results.
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs) expressed in absorbed dose per air kerma free-in-air for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of FLUKA code in computation of organ DCCs for photons using ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV in comparison with the results published in ICRP 74. The DCCs of female voxel phantom were found to be higher in comparison with male phantom for almost all organs in both the geometries. PMID:21147784
Application des codes de Monte Carlo à la radiothérapie par rayonnement à faible TEL
NASA Astrophysics Data System (ADS)
Marcié, S.
1998-04-01
In radiation therapy, a variety of low-LET radiations are used: photons from 60Co, photons and electrons from 4 to 25 MV generated in linear accelerators, and photons from 137Cs, 192Ir and 125I. To determine as accurately as possible the dose delivered to tissue by these radiations, software tools and measurement instruments are used. With the growth in the power and capacity of computers, the application of Monte Carlo codes has extended to radiation therapy. This has made it possible to better characterise the effects of the radiations, determine spectra, specify the values of the parameters used in dosimetric calculations, verify algorithms, study the measurement systems and phantoms used, calculate the dose at points inaccessible to measurement, and consider the use of new radionuclides.
G. S. Chang; R. C. Pederson
2005-07-01
Mixed oxide (MOX) test capsules prepared with weapons-derived plutonium have been irradiated to a burnup of 50 GWd/t. The MOX fuel was fabricated at Los Alamos National Laboratory by a master-mix process and has been irradiated in the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Previous withdrawals of the same fuel have occurred at 9, 21, 30, and 40 GWd/t. Oak Ridge National Laboratory (ORNL) manages this test series for the Department of Energy's Fissile Materials Disposition Program (FMDP). The fuel burnup analyses presented in this study were performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and buildup code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations for the ATR small I-irradiation test position. The purpose of this report is to validate both the Weapons-Grade Mixed Oxide (WG-MOX) test assembly model and the new fuel burnup analysis methodology by comparing the computed results against the neutron monitor measurements.
Yakoumakis, E; Tzamicha, E; Dimitriadis, A; Georgiou, E; Tsapaki, V; Chalazonitis, A
2015-07-01
Mammography is a standard procedure that facilitates breast cancer detection. Initial results of contrast-enhanced digital mammography (CEDM) are promising. The purpose of this study is to assess the CEDM radiation dose using a Monte Carlo code. The EGSnrc MC code was used to simulate the interaction of photons with matter and estimate the glandular dose (Dg). A voxel female human phantom with a 2-8 cm breast thickness range and a breast glandular composition of 50 % was applied. Dg values ranged between 0.96 and 1.45 mGy (low and high energy). Dg values for a breast thickness of 5.0 cm and a glandular fraction of 50 % for the craniocaudal and mediolateral oblique views were 1.12 mGy (of which the low-energy image contributes 0.98 mGy) and 1.07 mGy (low-energy contribution 0.95 mGy), respectively. The low-kV part of CEDM is the main contributor to the total glandular breast dose.
NASA Astrophysics Data System (ADS)
Lee, Y.-K.; Brun, E.
2014-04-01
The Sodium-cooled fast neutron reactor ASTRID is currently under design and development in France. The traditional ECCO/ERANOS fast reactor code system used for ASTRID core design calculations relies on the multi-group JEFF-3.1.1 data library. To gauge the use of the ENDF/B-VII.0 and JEFF-3.1.1 nuclear data libraries in fast reactor applications, two recent OECD/NEA computational benchmarks specified by Argonne National Laboratory were calculated. Using the continuous-energy TRIPOLI-4 Monte Carlo transport code, both the ABR-1000 MWth MOX core and the metallic (U-Pu) core were investigated. Under two different fast neutron spectra and two data libraries, ENDF/B-VII.0 and JEFF-3.1.1, reactivity impact studies were performed. Using the JEFF-3.1.1 library under the BOEC (beginning of equilibrium cycle) condition, high reactivity effects of 808 ± 17 pcm and 1208 ± 17 pcm were observed for the ABR-1000 MOX core and the metallic core, respectively. To analyze the causes of these differences in reactivity, several TRIPOLI-4 runs using the mixed-data-libraries feature allowed us to identify the nuclides and the nuclear data accounting for the major part of the observed reactivity discrepancies.
NASA Astrophysics Data System (ADS)
Bardenet, Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics and point to landmarks in the literature for the curious reader.
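The two simplest schemes this abstract names, rejection sampling and importance sampling, fit in a few lines of Python. This is an illustrative sketch only: the Beta(2,2) target (pdf 6x(1-x), mean 0.5) and the uniform proposal are choices made here, not examples from the paper.

```python
import random

random.seed(42)

# Illustrative target: Beta(2,2), pdf 6x(1-x) on [0,1], with mean 0.5.
def target_pdf(x):
    return 6.0 * x * (1.0 - x)

def rejection_sample(n, m=1.5):
    """Draw n samples from target_pdf by rejection from a uniform proposal.

    m must bound target_pdf(x) / proposal_pdf(x); here max 6x(1-x) = 1.5.
    """
    out = []
    while len(out) < n:
        x = random.random()                 # proposal draw, pdf = 1 on [0,1]
        if random.random() * m <= target_pdf(x):
            out.append(x)                   # accept with prob pdf(x) / m
    return out

def importance_mean(n):
    """Estimate E[X] under the target by weighting uniform draws."""
    total = 0.0
    for _ in range(n):
        x = random.random()
        total += x * target_pdf(x)          # weight = target_pdf / proposal_pdf
    return total / n

rej_mean = sum(rejection_sample(20_000)) / 20_000
imp_mean = importance_mean(20_000)
```

Both estimators converge to the target mean of 0.5; their relative efficiency depends on how well the proposal matches the target, which is exactly the trade-off the review discusses.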
Giantsoudi, D; Schuemann, J; Dowdell, S; Paganetti, H; Jia, X; Jiang, S
2014-06-15
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications now allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, whose anatomical geometric complexity (air cavities and density heterogeneities) makes dose calculation very challenging, and prostate cases, because of the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate three-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions with criteria of 2% of the prescription dose and 2 mm. Results: For seven of the eight studied cases, mean target dose, V90 and D90 differed by less than 2% between the TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four of the five head-and-neck cases showed a target gamma index passing rate of more than 99%; the fifth had a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MC code for a group of dosimetrically challenging patient cases.
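The gamma analysis used above has a standard definition: each reference point passes if some nearby evaluated point agrees within the combined dose/distance criterion. A simplified one-dimensional global-gamma sketch (the Gaussian profiles are made up for illustration; clinical tools work on 3D grids with interpolation):

```python
import numpy as np

def gamma_passing_rate(ref, ev, spacing_mm, dose_frac, dta_mm, threshold=0.1):
    """Simplified global 1-D gamma analysis.

    ref, ev    : reference and evaluated dose profiles on one regular grid
    spacing_mm : grid spacing (mm)
    dose_frac  : dose criterion as a fraction of max(ref), e.g. 0.02 for 2%
    dta_mm     : distance-to-agreement criterion (mm)
    Points below threshold * max(ref) are excluded, as is common practice.
    """
    x = np.arange(len(ref)) * spacing_mm
    dd = dose_frac * ref.max()
    gammas = []
    for i in range(len(ref)):
        if ref[i] < threshold * ref.max():
            continue
        # gamma at i = minimum combined dose/distance penalty over all points
        term = ((x - x[i]) / dta_mm) ** 2 + ((ev - ref[i]) / dd) ** 2
        gammas.append(np.sqrt(term.min()))
    return float((np.asarray(gammas) <= 1.0).mean())

x = np.arange(50.0)
ref = np.exp(-((x - 25.0) ** 2) / 50.0)          # toy Gaussian dose profile
rate_same = gamma_passing_rate(ref, ref, 1.0, 0.02, 2.0)       # identical -> 1.0
rate_off = gamma_passing_rate(ref, 1.5 * ref, 1.0, 0.02, 2.0)  # 50% error -> fails
```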
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Using SERPENT Monte Carlo and Burnup code to model Traveling Wave Reactors (TWR)
NASA Astrophysics Data System (ADS)
Gulik, Volodymyr; Pavlovych, Volodymyr; Tkaczyk, Alan Henry
2014-06-01
This paper is mainly devoted to a proof-of-principle implementation of the SERPENT code for the simulation of traveling wave reactors. The performance of the SERPENT 1.1.19 and SERPENT 2 codes was investigated for multiprocessor tasks with long burnup steps. Methods to remove the influence of the ignition zone were considered, and neutronics simulations with various fragmentations of the burnup zone were performed.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC codes. The situation is different for eigenvalue calculations, however, because they proceed on a generation-by-generation basis and thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
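The generation-by-generation structure that complicates GPU thread coordination can be seen in a toy analog power iteration for a one-speed bare slab. All cross sections, dimensions and generation counts below are invented for illustration; a production code would add survival biasing, proper nu sampling and variance estimation.

```python
import math
import random

random.seed(1)

# Invented one-speed cross sections (cm^-1) for a bare homogeneous slab.
SIG_T, SIG_S, SIG_F = 1.0, 0.6, 0.15     # total, scattering, fission
SIG_A = SIG_T - SIG_S                     # absorption = capture + fission
NU = 2.5                                  # mean neutrons per fission
HALF_WIDTH = 4.0                          # slab half-width (cm)

def run_generation(sites):
    """Transport one fission generation; return the next generation's sites."""
    nxt = []
    for x in sites:
        mu = 2.0 * random.random() - 1.0              # isotropic start direction
        while True:
            x += mu * (-math.log(random.random()) / SIG_T)   # free flight
            if abs(x) > HALF_WIDTH:
                break                                  # leaked out of the slab
            if random.random() < SIG_S / SIG_T:
                mu = 2.0 * random.random() - 1.0       # isotropic scattering
                continue
            if random.random() < SIG_F / SIG_A:        # absorption -> fission?
                nxt.extend([x] * int(NU + random.random()))  # bank ~NU sites
            break
    return nxt

N = 5000
sites = [0.0] * N
k_hist = []
for gen in range(40):
    new = run_generation(sites)
    k_hist.append(len(new) / len(sites))     # k = births / source this generation
    sites = random.choices(new, k=N) if new else sites   # renormalize the bank
k_eff = sum(k_hist[10:]) / len(k_hist[10:])  # average over active generations
```

The per-generation dependency (the bank of one generation seeds the next) is exactly why the eigenvalue case cannot be flattened into one embarrassingly parallel kernel.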
1991-08-01
Version: 00 The original MORSE code was a multipurpose neutron and gamma-ray transport Monte Carlo code. It was designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems could be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry could be used, with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution was allowed. MORSE-CG incorporated the Mathematical Applications, Inc. (MAGI) combinatorial geometry routines. MORSE-B modifies the Monte Carlo neutron and photon transport computer code MORSE-CG by adding routines which allow various flexible options.
NASA Astrophysics Data System (ADS)
Kahraman, A.; Kaya, S.; Jaksic, A.; Yilmaz, E.
2015-05-01
Radiation-sensing Field Effect Transistors (RadFETs or MOSFET dosimeters) with SiO2 gate dielectric have found applications in space, radiotherapy clinics, and high-energy physics laboratories. More sensitive RadFETs, which require modifications in device design, including the gate dielectric, are being considered for personal dosimetry applications. This paper presents the results of a detailed study of the RadFET energy response simulated with the PENELOPE Monte Carlo code. Alternative materials to SiO2 were investigated to develop highly efficient new radiation sensors. Namely, in addition to SiO2, Al2O3 and HfO2 were simulated as gate materials, and the amounts of energy deposited in these layers were determined for photon irradiation with energies between 20 keV and 5 MeV. The simulations were performed for capped and uncapped configurations of the devices, irradiated by point and extended sources whose surface area is the same as that of the RadFETs. Energy distributions of transmitted and backscattered photons were estimated using impact detectors to provide information about particle fluxes within the geometrical structures. The absorbed energy values in the RadFET material zones were recorded. For photons with low and medium energies, the physical processes that affect the absorbed energy values in different gate materials are discussed on the basis of the modelling results. The results show that HfO2 is the most promising of the simulated gate materials.
NASA Astrophysics Data System (ADS)
Mohanty, P. K.; Dugad, S. R.; Gupta, S. K.
2012-04-01
A detailed description of a compact Monte Carlo simulation code "G3sim" for studying the performance of a plastic scintillator detector with wavelength shifter (WLS) fiber readout is presented. G3sim was developed for optimizing the design of new scintillator detectors used in the GRAPES-3 extensive air shower experiment. Propagation of the blue photons produced by the passage of relativistic charged particles in the scintillator is treated by incorporating absorption, total internal reflection, and diffuse reflection. Capture of blue photons by the WLS fibers and the subsequent re-emission of longer wavelength green photons is appropriately treated. The trapping and propagation of green photons inside the WLS fiber is treated using the laws of optics for meridional and skew rays. The propagation time of each photon is taken into account in the generation of the electrical signal at the photomultiplier. A comparison of the results from G3sim with the performance of a prototype scintillator detector showed excellent agreement between the simulated and measured properties. The simulation results can be parametrized in terms of exponential functions, providing deeper insight into the functioning of these versatile detectors. G3sim can be used to aid the design and optimize the performance of scintillator detectors prior to actual fabrication, which may result in a considerable saving of time, labor, and money.
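For meridional rays, the trapping condition mentioned above reduces to a simple cone criterion: a photon is trapped when its angle to the fiber axis is small enough that it hits the core-cladding wall beyond the critical angle. The sketch below uses typical single-clad fiber indices (polystyrene core, PMMA cladding) as assumed values, not the GRAPES-3 fiber specification, and neglects skew rays, which increase trapping in a real fiber.

```python
import math
import random

random.seed(7)

# Assumed indices: polystyrene core / PMMA cladding (textbook values).
N_CORE, N_CLAD = 1.59, 1.49
# A meridional ray is trapped when sin(alpha) <= sqrt(1 - (n_clad/n_core)^2),
# where alpha is the photon's angle to the fiber axis.
SIN_ALPHA_MAX = math.sqrt(1.0 - (N_CLAD / N_CORE) ** 2)

def trapped_fraction(n):
    """MC estimate of the meridional trapping fraction for photons
    emitted isotropically on the fiber axis (both ends counted)."""
    trapped = 0
    for _ in range(n):
        cos_axis = 2.0 * random.random() - 1.0       # isotropic direction
        sin_alpha = math.sqrt(1.0 - cos_axis ** 2)   # sine of angle to axis
        if sin_alpha <= SIN_ALPHA_MAX:               # total internal reflection
            trapped += 1
    return trapped / n

est = trapped_fraction(200_000)
analytic = 1.0 - math.sqrt(1.0 - SIN_ALPHA_MAX ** 2)   # = 1 - cos(alpha_max)
```

For these indices the analytic meridional fraction is about 6% total (roughly 3% per fiber end), which is why WLS readout simulations track photon budgets so carefully.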
Interpretation of 3D void measurements with Tripoli4.6/JEFF3.1.1 Monte Carlo code
Blaise, P.; Colomba, A.
2012-07-01
The present work details the first analysis of the 3D void phase conducted during the EPICURE/UM17x17/7% mixed UOX/MOX configuration. This configuration is composed of a homogeneous central 17x17 MOX-7% assembly, surrounded by portions of 17x17 UO2 assemblies with guide tubes. The void bubble is modelled by a small waterproof 5x5 fuel pin parallelepiped box of 11 cm height, placed in the centre of the MOX assembly. This bubble, initially placed at the core mid-plane, is then moved to different axial positions to study the evolution of the axial perturbation in the core. Then, to simulate the growth of this bubble and understand the effects of an increased void fraction along the fuel pin, 3 and 5 bubbles were stacked axially from the core mid-plane. The C/E comparisons obtained with the Monte Carlo code Tripoli4 for both radial and axial fission rate distributions, and in particular the reproduction of the very important flux gradients at the void/water interfaces, which change as the bubble is displaced along the z-axis, are very satisfactory. This demonstrates both the capability of the code and its library to reproduce this kind of situation and the very good quality of the experimental results, confirming UM-17x17 as an excellent experimental benchmark for 3D code validation. This work has been performed within the frame of the V and V program for the future APOLLO3 deterministic code of CEA starting in 2012, and its V and V benchmarking database. (authors)
NASA Astrophysics Data System (ADS)
Kim, Jung-Ha; Hill, Robin; Kuncic, Zdenka
2012-07-01
The Monte Carlo (MC) method has proven invaluable for radiation transport simulations to accurately determine radiation doses, and it is widely considered a reliable computational measure that can substitute for a physical experiment where direct measurements are not possible or feasible. In the EGSnrc/BEAMnrc MC codes, there are several user-specified parameters and customized transport algorithms which may affect the calculation results. In order to fully utilize the MC methods available in these codes, it is essential to understand all these options and to use them appropriately. In this study, the effects of the electron transport algorithms in EGSnrc/BEAMnrc, which often trade calculation accuracy against efficiency, were investigated in the buildup region of a homogeneous water phantom and also in a heterogeneous phantom using the DOSRZnrc user code. The algorithms and parameters investigated include: boundary crossing algorithm (BCA), skin depth, electron step algorithm (ESA), global electron cutoff energy (ECUT) and electron production cutoff energy (AE). The variations in calculated buildup doses were found to be larger than 10% for different user-specified transport parameters. We found that using BCA = EXACT gave the best results in terms of accuracy and efficiency in calculating buildup doses using DOSRZnrc. In addition, using the ESA = PRESTA-I option was found to be the best way of reducing the total calculation time without losing accuracy in the results at high energies (a few keV to MeV). We also found that although choosing a higher ECUT/AE value in the beam modelling can dramatically improve computation efficiency, there is a significant trade-off in surface dose uncertainty. Our study demonstrates that a careful choice of user-specified transport parameters is required when conducting similar MC calculations.
NASA Astrophysics Data System (ADS)
Fracchiolla, F.; Lorentini, S.; Widesott, L.; Schwarz, M.
2015-11-01
We propose a method of creating and validating a Monte Carlo (MC) model of a proton Pencil Beam Scanning (PBS) machine using only commissioning measurements and avoiding nozzle modeling. Measurements with a scintillating screen coupled with a CCD camera, an ionization chamber and a Faraday cup were used to model the beam in TOPAS without using any machine parameter information except the virtual source distance from the isocenter. The model was then validated on simple Spread Out Bragg Peaks (SOBP) delivered in a water phantom and on six realistic clinical plans (many involving 3 or more fields) on an anthropomorphic phantom. In particular, the behavior of the moveable Range Shifter (RS) was investigated and its modeling is proposed. Gamma analysis (3%, 3 mm) was used to compare MC, TPS (XiO-ELEKTA) and measured 2D dose distributions (using radiochromic film). The MC model proposed here shows good results in the validation phase, both for simple irradiation geometries (SOBP in water) and for modulated treatment fields (on anthropomorphic phantoms). In particular, head lesions were investigated and both MC and TPS data were compared with measurements. Treatment plans with no RS always showed very good agreement with both (γ passing rate (PR) > 95%). Treatment plans in which the RS was needed were also tested and validated. For these treatment plans, MC results showed better agreement with measurements (γ-PR > 93%) than those from the TPS (γ-PR < 88%). This work shows how to simplify the MC modeling of a PBS machine for proton therapy treatments without accounting for any hardware components, and proposes a more reliable RS modeling than the one implemented in our TPS. The validation process has shown that this code is a valid candidate for a completely independent treatment plan dose calculation algorithm. This makes the code an important future tool for the patient-specific QA verification process.
Gohar, Y.; Zhong, Z.; Talamo, A.; Nuclear Engineering Division
2009-06-09
Argonne National Laboratory (ANL) of the USA and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the conceptual design development of an electron-accelerator-driven subcritical (ADS) facility, using the KIPT electron accelerator. The neutron source of the subcritical assembly is generated from the interaction of a 100 kW electron beam with a natural uranium target. The electron beam has a uniform spatial distribution and an electron energy in the range of 100 to 200 MeV. The main functions of the subcritical assembly are the production of medical isotopes and the support of the Ukrainian nuclear power industry. Neutron physics experiments and material structure analyses are planned using this facility. With the 100 kW electron beam power, the total thermal power of the facility is ~375 kW, including a fission power of ~260 kW. The burnup of the fissile materials and the buildup of fission products continuously reduce the reactivity during operation, which reduces the neutron flux level and consequently the facility performance. To preserve the neutron flux level during operation, fuel assemblies should be added after long operating periods to compensate for the lost reactivity. This process requires accurate prediction of the fuel burnup, the decay behavior of the fission products, and the reactivity introduced by adding fresh fuel assemblies. The recent developments of Monte Carlo computer codes, the high speed of computer processors, and parallel computation techniques have made it possible to perform detailed three-dimensional burnup simulations. A full, detailed three-dimensional geometrical model is used for the burnup simulations, with continuous-energy nuclear data libraries for the transport calculations and 63-multigroup or one-group cross-section libraries for the depletion calculations. The Monte Carlo computer codes MCNPX and MCB are utilized for this study. MCNPX transports the
ITS Version 4.0: Electron/photon Monte Carlo transport codes
Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.
1995-07-01
The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.
Thermal neutron response of a boron-coated GEM detector via GEANT4 Monte Carlo code.
Jamil, M; Rhee, J T; Kim, H G; Ahmad, Farzana; Jeon, Y J
2014-10-22
In this work, we report the design configuration and the performance of a hybrid Gas Electron Multiplier (GEM) detector. In order to make the detector sensitive to thermal neutrons, the forward electrode of the GEM has been coated with enriched boron-10, which works as a neutron converter. A 5×5 cm² GEM configuration was used for the thermal neutron studies. The response of the detector was estimated using the GEANT4 MC code with two different physics lists. Using the QGSP_BIC_HP physics list, the neutron detection efficiency was determined to be about 3%, while with the QGSP_BERT_HP physics list the efficiency was around 2.5%, at an incident thermal neutron energy of 25 meV. This response shows that coating the GEM with a boron converter improves the efficiency of thermal neutron detection.
NASA Astrophysics Data System (ADS)
Carvajal, M. A.; García-Pareja, S.; Guirado, D.; Vilches, M.; Anguiano, M.; Palma, A. J.; Lallena, A. M.
2009-10-01
In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of ~5, while the efficiency is increased by a factor of above 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a 60Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0°, 15°, 30°, 45°, 60° and 75°. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.
A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code
NASA Astrophysics Data System (ADS)
Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.
2013-12-01
The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the framework of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat broadband solar flux calculations, including thermal infrared emission, using the k-distribution parameters of Sekiguchi and Nakajima (2008). To construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated from randomized layer-by-layer optical thickness distributions and regularly distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. For method 2 (numerical modeling), we employed numerical simulation results for Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z), and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of a horizontal resolution of 100 m, regionally averaged cloud optical thickness,
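The core of a forward Monte Carlo flux calculation like the one described above is free-path sampling in optical depth. A minimal plane-parallel toy, with a single homogeneous layer and isotropic scattering standing in for a realistic phase function, illustrates the idea; all numbers here are illustrative, not from the study:

```python
import math
import random

random.seed(3)

def cloud_rt(tau_star, omega0, n):
    """Forward MC reflectance/transmittance of one homogeneous
    plane-parallel cloud layer, for normally incident photons.

    tau_star : layer optical thickness
    omega0   : single-scattering albedo (1.0 = conservative scattering)
    """
    refl = trans = 0
    for _ in range(n):
        tau, mu = 0.0, 1.0                  # optical-depth position, direction
        while True:
            tau += mu * (-math.log(random.random()))   # sample free path
            if tau < 0.0:
                refl += 1                   # escaped out the top
                break
            if tau > tau_star:
                trans += 1                  # escaped out the bottom
                break
            if random.random() > omega0:
                break                       # absorbed inside the cloud
            mu = 2.0 * random.random() - 1.0   # isotropic scatter
    return refl / n, trans / n

r_thick, t_thick = cloud_rt(5.0, 1.0, 20_000)   # thick conservative layer
r_thin, t_thin = cloud_rt(0.01, 1.0, 20_000)    # nearly transparent layer
```

With conservative scattering every photon eventually escapes, so reflectance and transmittance sum to one; a thick layer reflects most photons while a thin one transmits them, which is the behavior the broadband flux codes integrate over wavelength and geometry.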
Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.
2006-07-01
The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We will first recall those results, and we will then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we will present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
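The Shannon entropy diagnostic recalled here is easy to state: bin the fission source spatially each generation and watch the entropy of the binned distribution settle to a plateau. A minimal sketch (the bin counts below are made up for illustration):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission-source distribution.

    Plotted generation by generation, a stationary source gives a flat
    entropy trace; a still-converging source shows a drift.
    """
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)       # -sum p log2 p over occupied bins
    return h

# Entropy is maximal (log2 of the bin count) for a uniform source and
# zero when the source has collapsed into a single bin.
h_uniform = shannon_entropy([100] * 8)                  # -> 3.0 bits
h_point = shannon_entropy([800, 0, 0, 0, 0, 0, 0, 0])   # -> 0.0 bits
```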
Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study
NASA Astrophysics Data System (ADS)
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-01
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not replaced analytical methods up to now due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between the TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for the prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was
Yakoumakis, E; Dimitriadis, A; Gialousis, G; Makri, T; Karavasilis, E; Yakoumakis, N; Georgiou, E
2015-02-01
Radiation protection and estimation of the radiological risk in paediatric radiology are essential due to children's significant radiosensitivity and their greater overall health risk. The purpose of this study was to estimate the organ and effective doses of paediatric patients undergoing barium meal (BM) examinations and also to evaluate the radiation Risk of Exposure Induced cancer Death (REID) for paediatric patients undergoing BM examinations. BM studies involve fluoroscopy and multiple radiographs. Since direct measurements of the dose in each organ are very difficult, if possible at all, clinical measurements of dose-area products (DAPs) and the PCXMC 2.0 Monte Carlo code were used. In the clinical measurements, DAPs were assessed during examination of 51 patients undergoing BM examinations, separated almost equally into three age categories: neonatal, 1- and 5-y-old. The organs receiving the highest amounts of radiation during BM examinations were the stomach (10.4, 10.2 and 11.1 mGy), the gall bladder (7.1, 5.8 and 5.2 mGy) and the spleen (7.5, 8.2 and 4.3 mGy); the three values in brackets correspond to neonatal, 1- and 5-y-old patients, respectively. For all ages, the main contributors to the total organ and effective doses are the fluoroscopy projections. The average DAP values and absorbed doses to the patient were higher for the left lateral projections. The REID calculated for boys was 4.8 × 10⁻², 3.0 × 10⁻² and 2.0 × 10⁻² % for neonatal, 1- and 5-y-old patients, respectively. The corresponding values for girls were 12.1 × 10⁻², 5.5 × 10⁻² and 3.4 × 10⁻² %.
NASA Astrophysics Data System (ADS)
Shinohara, Kouji; Suzuki, Yasuhiro; Kim, Junghee; Kim, Jun Young; Jeon, Young Mu; Bierwage, Andreas; Rhee, Tongnyeol
2016-11-01
The fast ion dynamics and the associated heat load on the plasma facing components in the KSTAR tokamak were investigated with the orbit following Monte-Carlo (OFMC) code in several magnetic field configurations and realistic wall geometry. In particular, attention was paid to the effect of resonant magnetic perturbation (RMP) fields. Both the vacuum field approximation as well as the self-consistent field that includes the response of a stationary plasma were considered. In both cases, the magnetic perturbation (MP) is dominated by the toroidal mode number n = 1, but otherwise its structure is strongly affected by the plasma response. The loss of fast ions increased significantly when the MP field was applied. Most lost particles hit the poloidal limiter structure around the outer mid-plane on the low field side, but the distribution of heat loads across the three limiters varied with the form of the MP. Short-timescale loss of supposedly well-confined co-passing fast ions was also observed. These losses started within a few poloidal transits after the fast ion was born deep inside the plasma on the high-field side of the magnetic axis. In the configuration studied, these losses are facilitated by the combination of two factors: (i) the large magnetic drift of fast ions across a wide range of magnetic surfaces due to a low plasma current, and (ii) resonant interactions between the fast ions and magnetic islands that were induced inside the plasma by the external RMP field. These effects are expected to play an important role in present-day tokamaks.
NASA Astrophysics Data System (ADS)
Kamiya, Yoshitomo
The possibility of quantum spin liquids, characterized by nontrivial entanglement properties or a topological nonlocal order parameter, has long been debated both theoretically and experimentally. Since candidate systems (e.g., frustrated quantum magnets or 5d transition metal oxides) may host other competing phases including conventional magnetic ordered phases, it is natural to ask what types of global phase diagrams can be anticipated depending on coupling constants, temperature, dimensionality, etc. In this talk, by considering an extension of the Kitaev toric code Hamiltonians by Ising interactions on 2D (square) and 3D (cubic) lattices, I will present thermodynamic phase diagrams featuring magnetic ``three states of matter,'' namely, quantum spin liquid, paramagnetic, and magnetically ordered phases (analogous to liquid, gas, and solid, respectively, in conventional matter) obtained by unbiased quantum Monte Carlo simulations [YK, Y. Kato, J. Nasu, and Y. Motome, PRB 92, 100403(R) (2015)]. We find that the ordered phase borders on the spin liquid around the exactly solvable point by a discontinuous transition line in 3D, while it grows continuously from the quantum critical point in 2D. In both cases, peculiar proximity effects to the nearby spin liquid phases are observed at high temperature even when the ground state is magnetically ordered. Such proximity effects include flux-shrinking and a tricritical behavior in 3D and a ``fractionalization'' of the order parameter field at the quantum critical point in 2D, both of which can be detected by measuring critical exponents. Work done in collaboration with Yasuyuki Kato, Joji Nasu, and Yukitoshi Motome.
Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) computed by the GATE and PHITS codes have not been reported. These relationships were studied here for PDD and proton range, with the FLUKA code and experimental data as references. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. Three parameters were evaluated: the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle, and then defined the optimal simulation parameters from the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
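The R90 range quoted above is the depth at which the distal fall-off of the depth-dose curve crosses 90% of its maximum. A minimal sketch of extracting it from a sampled PDD by linear interpolation; the Bragg-peak-like curve below is synthetic, not the measured data:

```python
import numpy as np

def range_r90(depth_mm, pdd):
    """Depth of the distal 90% dose level (R90) by linear interpolation.

    Searches the distal fall-off (beyond the depth of maximum dose) for
    the point where the PDD crosses 90% of its maximum.
    """
    i_max = int(np.argmax(pdd))
    level = 0.9 * pdd[i_max]
    for i in range(i_max, len(pdd) - 1):
        if pdd[i] >= level > pdd[i + 1]:
            f = (pdd[i] - level) / (pdd[i] - pdd[i + 1])
            return depth_mm[i] + f * (depth_mm[i + 1] - depth_mm[i])
    return None

# Synthetic Bragg-peak-like depth-dose curve peaking near 265 mm
z = np.linspace(0.0, 300.0, 601)
pdd = np.exp(-((z - 265.0) / 12.0) ** 2)
r90 = range_r90(z, pdd)
print(f"R90 = {r90:.1f} mm")
```

For this synthetic Gaussian peak the analytic crossing is at 265 + 12·sqrt(−ln 0.9) ≈ 268.9 mm, which the interpolation reproduces.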
Brooks, E.D. III
1989-08-01
We introduce a new implicit Monte Carlo technique for solving time-dependent radiation transport problems involving spontaneous emission. In the usual implicit Monte Carlo procedure, an effective scattering term is dictated by the requirement of self-consistency between the transport and implicitly differenced atomic population equations. The effective scattering term, a source of inefficiency for optically thick problems, becomes an impasse for problems with gain, where its sign is negative. In our new technique the effective scattering term does not occur and the execution time for the Monte Carlo portion of the algorithm is independent of opacity. We compare the performance and accuracy of the new symbolic implicit Monte Carlo technique to the usual effective scattering technique for the time-dependent description of a two-level system in slab geometry. We also examine the possibility of effectively exploiting multiprocessors on the algorithm, obtaining supercomputer performance using shared-memory multiprocessors based on cheap commodity microprocessor technology. © 1989 Academic Press, Inc.
NASA Astrophysics Data System (ADS)
Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc
2012-06-01
This paper presents a dosimetric study of the system named "Papillon 50", used in the department of radiotherapy of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes, which can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose to the tumor with an optimal distribution. We have analyzed depth-dose curves and dose profiles for the naked tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE, with a detailed geometrical description of the experimental setup and sufficient statistics. The simulation results are in accordance with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The Monte Carlo PENELOPE simulations agree with the measurements for this 50 kV X-ray system and are able to confirm the measurements provided by Gafchromic films or ionization chambers. The results also demonstrate that Monte Carlo simulations could help validate future applicators designed for other localizations such as breast or skin cancers, and could be a reliable alternative for a rapid evaluation of the dose delivered by such a system that uses multiple designs of applicators.
Fracchiolla, F; Lorentini, S; Widesott, L; Schwarz, M
2015-11-01
We propose a method of creating and validating a Monte Carlo (MC) model of a proton Pencil Beam Scanning (PBS) machine using only commissioning measurements, avoiding nozzle modeling. Measurements with a scintillating screen coupled with a CCD camera, an ionization chamber and a Faraday cup were used to model the beam in TOPAS, with no machine parameter information other than the virtual source distance from the isocenter. The model was then validated on simple Spread-Out Bragg Peaks (SOBP) delivered in a water phantom and with six realistic clinical plans (many involving 3 or more fields) on an anthropomorphic phantom. In particular, the behavior of the moveable Range Shifter (RS) feature was investigated and a model of it is proposed. Gamma analysis (3%, 3 mm) was used to compare MC, TPS (XiO-ELEKTA) and measured 2D dose distributions (using radiochromic film). The MC model proposed here shows good results in the validation phase, both for simple irradiation geometry (SOBP in water) and for modulated treatment fields (on anthropomorphic phantoms). In particular, head lesions were investigated and both MC and TPS data were compared with measurements. Treatment plans with no RS always showed very good agreement with both (γ passing rate (PR) > 95%). Treatment plans in which the RS was needed were also tested and validated; for these, the MC results showed better agreement with measurements (γ-PR > 93%) than those from the TPS (γ-PR < 88%). This work shows how to simplify the MC modeling of a PBS machine for proton therapy treatments without accounting for any hardware components, and proposes a more reliable RS model than the one implemented in our TPS. The validation process has shown that this code is a valid candidate for a completely independent treatment-plan dose calculation algorithm, making it an important future tool for the patient-specific QA verification process.
NASA Astrophysics Data System (ADS)
Tani, K.; Shinohara, K.; Oikawa, T.; Tsutsui, H.; McClements, K. G.; Akers, R. J.; Liu, Y. Q.; Suzuki, M.; Ide, S.; Kusama, Y.; Tsuji-Iio, S.
2016-11-01
As part of the verification and validation of a newly developed non-steady-state orbit-following Monte-Carlo code, application studies of time-dependent neutron rates have been made for a specific shot in the Mega Amp Spherical Tokamak (MAST), using 3D fields representing vacuum resonant magnetic perturbations (RMPs) and toroidal field (TF) ripple. The time evolutions of density, temperature and rotation rate used in the application of the code to MAST are taken directly from experiment. The calculation results approximately agree with the experimental data. It is also found that a full orbit-following scheme is essential to reproduce the neutron rates in MAST.
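A full orbit-following scheme of the kind found essential above integrates the Lorentz force for each marker, commonly with the Boris algorithm. A self-contained sketch, with uniform fields and parameter values chosen purely for illustration (this is not the OFMC code itself):

```python
import numpy as np

def boris_push(x, v, q_over_m, E, B, dt):
    """One step of the Boris scheme for dx/dt = v, dv/dt = (q/m)(E + v x B)."""
    v_minus = v + 0.5 * dt * q_over_m * E
    t = 0.5 * dt * q_over_m * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * q_over_m * E
    return x + dt * v_new, v_new

# Proton-like test particle gyrating in a uniform field B = 1 T (SI units)
q_over_m = 9.58e7                      # C/kg, proton charge-to-mass ratio
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
x = np.zeros(3)
v = np.array([1.0e6, 0.0, 0.0])        # 1000 km/s perpendicular velocity
dt = 1e-10
speed0 = np.linalg.norm(v)
for _ in range(10000):
    x, v = boris_push(x, v, q_over_m, E, B, dt)
# For E = 0 the Boris rotation conserves kinetic energy exactly,
# so the particle stays on its gyro-circle over many periods
print(np.linalg.norm(v) / speed0)
```

The exact energy conservation of the velocity rotation is the main reason the Boris scheme is the workhorse for long full-orbit simulations.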
Abramov, B. M.; Alekseev, P. N.; Borodin, Yu. A.; Bulychjov, S. A.; Dukhovskoy, I. A.; Krutenkova, A. P.; Martemianov, M. A.; Matsyuk, M. A.; Turdakina, E. N.; Khanov, A. I.; Mashnik, Stepan Georgievich
2015-02-03
Momentum spectra of hydrogen isotopes have been measured at 3.5° from ^{12}C fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.
Choi, M.; Chan, V. S.; Green, D.; Jaeger, E. F.; Berry, L. A.; Heidbrink, W. W.
2009-11-26
To fully account for the finite drift orbit effect of fast ions on wave-particle interaction in ion-cyclotron radio frequency (ICRF) heating experiments in tokamaks, the 5-D finite-orbit Monte-Carlo plasma distribution solver ORBIT-RF is coupled self-consistently with the 2-D full wave code AORSA. Results of the ORBIT-RF/AORSA simulation are compared against fast-ion Dα (FIDA) measurements of the fast-ion distribution, as well as against a CQL3D/ray-tracing simulation with zero-orbit approximation, for the DIII-D ICRF wave beam-ion acceleration experiment. Preliminary ORBIT-RF/AORSA results suggest that finite orbit width effects may explain the outward radial shift of the spatial profile measured by FIDA.
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-01
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and, owing to its small size, it can in principle be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. Introducing metallic electrodes on the diamond crystals creates interface phenomena between high- and low-atomic-number materials. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 µm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 µm thick CVD-diamond layer with 0.2 µm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) 60Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. From a dosimetric point of view, graphite would be an almost perfect electrode
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method is explained and a mitigation strategy is proposed.
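The bootstrap procedure for the shortest 95% confidence interval of an efficiency gain can be sketched as follows. The per-history scores and relative run times below are synthetic stand-ins, not the brachytherapy tallies of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-history scores: stand-ins for tallies from a conventional
# and a correlated-sampling Monte Carlo run of the same quantity
conv = rng.normal(1.0, 0.30, size=5000)
corr = rng.normal(1.0, 0.10, size=5000)
t_conv, t_corr = 1.0, 1.2            # assumed relative run times per history

def gain(a, b):
    # efficiency = 1 / (variance of the mean * time); gain = eff_corr / eff_conv
    return (a.var(ddof=1) * t_conv * len(b)) / (b.var(ddof=1) * t_corr * len(a))

# Bootstrap the sampling distribution of the gain estimate
boots = np.array([gain(rng.choice(conv, conv.size), rng.choice(corr, corr.size))
                  for _ in range(2000)])
boots.sort()

# Shortest interval containing 95% of the bootstrap replicates
k = int(0.95 * len(boots))
widths = boots[k:] - boots[:len(boots) - k]
i = int(np.argmin(widths))
print(f"gain ~ {gain(conv, corr):.2f}, "
      f"shortest 95% CI [{boots[i]:.2f}, {boots[i + k]:.2f}]")
```

With near-normal scores the interval is narrow; heavy-tailed score distributions (like the high-weight photons described above) widen it dramatically, which is the phenomenon the study quantifies.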
Capogni, M; Lo Meo, S; Fazio, A
2010-01-01
Two CERN Monte Carlo codes, i.e. GEANT3.21 and GEANT4, were compared. The specific routine (sch2for), implemented in GEANT3.21 to simulate a disintegration process, and the G4RadioactiveDecay class, provided by GEANT4, were used for the computation of the full-energy-peak and total efficiencies of several radionuclides. No reference to experimental data was involved. A level of agreement better than 1% for the total efficiency and a deviation lower than 3.5% for the full-energy-peak efficiencies were found.
T. Oda; M. Shimada; K. Zhang; P. Calderoni; Y. Oya; M. Sokolov; R. Kolasinski
2011-11-01
The behavior of hydrogen isotopes implanted into tungsten containing vacancies was simulated using a Monte Carlo technique. The correlations between the distribution of implanted deuterium and the fluence, trap density and trap distribution were evaluated. Throughout the present study, qualitatively understandable results were obtained. To improve the precision of the model and obtain quantitatively reliable results, the following subjects must be addressed: (1) how to balance long-time irradiation processes with a rapid diffusion process, (2) how to prevent unrealistic accumulation of hydrogen, and (3) how to model the release of hydrogen forcibly loaded into a region where hydrogen already exists densely.
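A drastically simplified version of such a trapping Monte Carlo is a 1D lattice random walk with absorbing trap sites. Permanent trapping, a single implantation depth, and the cell dimensions here are crude assumptions for illustration; the actual model is far more detailed:

```python
import random

def implant_and_diffuse(n_atoms, depth_cells, trap_sites, n_steps, rng):
    """1D lattice random walk of implanted atoms. A walker entering a trap
    cell stops there (permanent trapping, a crude simplification); a walker
    stepping past the surface is released and lost."""
    trapped = [0] * depth_cells
    for _ in range(n_atoms):
        pos = 5                                   # implantation depth (cells)
        for _ in range(n_steps):
            if pos in trap_sites:
                break                             # atom is trapped
            pos += rng.choice((-1, 1))
            if pos < 0:                           # released at the surface
                pos = None
                break
            pos = min(pos, depth_cells - 1)
        if pos is not None and pos in trap_sites:
            trapped[pos] += 1
    return trapped

rng = random.Random(7)
traps = {20, 40}
profile = implant_and_diffuse(2000, 100, traps, 4000, rng)
print(profile[20], profile[40], sum(profile))
```

Even this toy model reproduces a qualitative feature of trap-limited transport: the shallow trap shadows the deeper one, so essentially no atoms accumulate at the second trap depth.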
A new method to calculate the response of the WENDI-II rem counter using the FLUKA Monte Carlo Code
NASA Astrophysics Data System (ADS)
Jägerhofer, Lukas; Feldbaumer, Eduard; Theis, Christian; Roesler, Stefan; Vincke, Helmut
2012-11-01
The FHT-762 WENDI-II is a commercially available wide-range neutron rem counter which uses a 3He counter tube inside a polyethylene moderator. To increase the response above 10 MeV of kinetic neutron energy, a layer of tungsten powder is implemented into the moderator shell. To characterize the response, a detailed model of the detector was developed and implemented for FLUKA Monte Carlo simulations. In common practice, Monte Carlo simulations are used to calculate the neutron fluence inside the active volume of the detector, and the resulting fluence is then folded offline with the reaction rate of the 3He(n,p)3H reaction to yield the proton-triton production rate. Consequently, this approach does not consider geometrical effects such as wall effects, where one or both reaction products leave the active volume of the detector without triggering a count. This work introduces a two-step simulation method which can be used to determine the detector's response, including geometrical effects, directly, using Monte Carlo simulations. A "first step" simulation identifies the 3He(n,p)3H reaction inside the active volume of the 3He counter tube and records its position. In the "second step" simulation, the tritons and protons are started, in accordance with the kinematics of the 3He(n,p)3H reaction, from the previously recorded positions, and a correction factor for geometrical effects is determined. The three-dimensional Monte Carlo model of the detector as well as the two-step simulation method were evaluated and tested in the well-defined fields of an 241Am-Be(α,n) source as well as in the field of a 252Cf source. Results were compared with measurements performed by Gutermuth et al. [1] at GSI with an 241Am-Be(α,n) source as well as with measurements performed by the manufacturer in the field of a 252Cf source. Both simulation results show very good agreement with the respective measurements. After validating the method, the response values in terms of
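The geometrical part of the two-step method can be caricatured in 2D: sample reaction positions over the tube cross-section (step 1) and launch the proton and triton back-to-back from each recorded position (step 2), counting reactions whose products both stay inside the active volume. All dimensions and particle ranges below are invented for illustration; the real second step uses the full reaction kinematics and energy deposition:

```python
import math
import random

rng = random.Random(1)
R_TUBE = 1.0          # counter-tube radius (arbitrary units)
R_P, R_T = 0.6, 0.2   # assumed proton / triton ranges in the gas

# Step 1 (stand-in): reaction positions uniform over the tube cross-section,
# as if recorded from a first transport simulation.
def sample_position():
    while True:
        x, y = rng.uniform(-R_TUBE, R_TUBE), rng.uniform(-R_TUBE, R_TUBE)
        if x * x + y * y <= R_TUBE ** 2:
            return x, y

def inside(x, y):
    return x * x + y * y <= R_TUBE ** 2

# Step 2: emit proton and triton back-to-back with isotropic direction and
# count reactions where both end points remain inside the active volume.
n, full_deposit = 100000, 0
for _ in range(n):
    x, y = sample_position()
    phi = rng.uniform(0.0, 2.0 * math.pi)
    ux, uy = math.cos(phi), math.sin(phi)
    if inside(x + R_P * ux, y + R_P * uy) and inside(x - R_T * ux, y - R_T * uy):
        full_deposit += 1

factor = full_deposit / n
print(f"geometrical correction factor ~ {factor:.3f}")
```

Reactions near the tube wall rarely contain both products, which is exactly the wall effect that folding the fluence with the reaction rate alone cannot capture.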
Monte Carlo neutrino oscillations
Kneller, James P.; McLaughlin, Gail C.
2006-03-01
We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.
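The essence of solving an integral equation by Monte Carlo is sampling the Neumann (multiple-scattering) series with random termination. A scalar toy version, not the actual S-matrix computation, that estimates the fixed point of u = f + c·u:

```python
import random

def mc_neumann(f, c, p_continue, n_samples, rng):
    """Monte Carlo estimate of the fixed point of u = f + c*u (|c| < 1),
    i.e. the Neumann series u = f * (1 + c + c^2 + ...), by random
    termination: continue each 'scattering order' with probability p and
    compensate with the unbiased weight factor c/p."""
    total = 0.0
    for _ in range(n_samples):
        weight, acc = 1.0, 0.0
        while True:
            acc += weight * f          # score this order of the series
            if rng.random() >= p_continue:
                break
            weight *= c / p_continue   # weight correction for surviving walks
        total += acc
    return total / n_samples

rng = random.Random(0)
f, c = 1.0, 0.5
est = mc_neumann(f, c, 0.7, 200000, rng)
print(est, "exact:", f / (1.0 - c))
```

Each surviving step contributes one higher order of the scattering expansion, so the estimator is unbiased for the full series without ever truncating it, which mirrors how a Monte Carlo sampling of the integral equation sidesteps stiff direct integration.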
ERIC Educational Resources Information Center
Houser, Larry L.
1981-01-01
Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
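A classroom-style simulation of this kind might model each at-bat as an independent Bernoulli trial and tabulate the longest hitting streak over a season. The batting average, at-bats per game, and season length below are illustrative choices:

```python
import random

def longest_hit_streak(batting_avg, n_games, rng, at_bats_per_game=4):
    """Longest run of consecutive games with at least one hit, for a batter
    modelled as i.i.d. Bernoulli at-bats (the classic toy model)."""
    streak = best = 0
    for _ in range(n_games):
        got_hit = any(rng.random() < batting_avg for _ in range(at_bats_per_game))
        streak = streak + 1 if got_hit else 0
        best = max(best, streak)
    return best

rng = random.Random(2024)
# Distribution of a .300 hitter's longest streak over a 162-game season
streaks = [longest_hit_streak(0.300, 162, rng) for _ in range(2000)]
mean_streak = sum(streaks) / len(streaks)
print("mean longest streak:", mean_streak)
```

Students often guess much shorter streaks than the simulation produces, which makes the point that long "hot streaks" arise naturally from pure chance.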
Adjoint electron-photon transport Monte Carlo calculations with ITS
Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.; Morel, J.E.
1995-02-01
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
Monte Carlo Simulation of a 6 MV X-Ray Beam for Open and Wedge Radiation Fields, Using GATE Code.
Bahreyni-Toosi, Mohammad-Taghi; Nasseri, Shahrokh; Momennezhad, Mahdi; Hasanabadi, Fatemeh; Gholamhosseinian, Hamid
2014-10-01
The aim of this study is to provide a control software system, based on Monte Carlo simulation, for calculating the dosimetric parameters of standard and wedge radiation fields. GATE version 6.1 (OpenGATE Collaboration) was used to simulate a compact 6 MV linear accelerator system. To accelerate the calculations, the phase-space technique and cluster computing (Condor version 7.2.4, Condor Team, University of Wisconsin-Madison) were used. Dosimetric parameters used in treatment planning systems for the standard and wedge radiation fields (10 cm × 10 cm to 30 cm × 30 cm and a 60° wedge), including the percentage depth dose and dose profiles, were obtained by both computational and experimental methods, and the gamma index with a 3%/3 mm criterion was applied to compare the calculated and measured results. Almost all calculated data points satisfied the gamma index criterion. Based on the good agreement between calculated and measured results obtained for various radiation fields in this study, GATE may be used as a useful tool for quality control or pretreatment verification procedures in radiotherapy.
NASA Astrophysics Data System (ADS)
Malouch, Fadhel
2016-02-01
The DV50 irradiation program was carried out from 2002 to 2006 in the OSIRIS material testing reactor (CEA-Saclay center) to assess the pressure-vessel steel toughness curve for a fast neutron fluence (E > 1 MeV) equivalent to a French 900-MWe PWR lifetime of 50 years. This program comprised the irradiation of 120 vessel-steel specimens, subdivided into two successive irradiations, DV50 no. 1 and DV50 no. 2. To measure the fast neutron fluence (E > 1 MeV) received by the specimens after each irradiation, the sample holders were equipped with activation foils that were withdrawn at the end of irradiation for activity counting and processing. The fast effective cross-sections used in the dosimeter processing were determined with a specific calculation scheme based on the Monte-Carlo code TRIPOLI-3 (and the nuclear data ENDF/B-VI and IRDF-90). In order to put the vessel-steel experiments on the same standard, a new dosimetric interpretation of the DV50 experiment has been performed using the Monte-Carlo code TRIPOLI-4 and more recent nuclear data (JEFF3.1.1 and IRDF-2002). This paper presents a comparison of the previous and recent calculations performed for the DV50 vessel-steel experiment to assess the impact on the dosimetric interpretation.
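Dosimeter processing of this kind inverts the simple activation equation A = nσφ(1 − e^(−λt_irr)) for the flux, and hence the fluence, assuming constant flux during irradiation. A minimal sketch with invented foil numbers (not the DV50 dosimetry data):

```python
import math

def fast_fluence(activity_eol, n_atoms, sigma_cm2, half_life_s, t_irr_s):
    """Fast-neutron fluence from an activation-foil measurement, assuming
    constant flux during irradiation (simple activation equation).

    A(end of irradiation) = n_atoms * sigma * phi * (1 - exp(-lambda*t_irr))
    => phi = A / (n * sigma * (1 - exp(-lambda*t_irr))), fluence = phi * t_irr.
    """
    lam = math.log(2.0) / half_life_s
    phi = activity_eol / (n_atoms * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s)))
    return phi * t_irr_s

# Illustrative numbers only (not the DV50 dosimetry data)
fluence = fast_fluence(
    activity_eol=5.0e4,                # Bq at end of irradiation
    n_atoms=1.0e20,                    # target atoms in the foil
    sigma_cm2=100e-27,                 # 100 mb effective cross-section
    half_life_s=70.9 * 86400.0,        # e.g. 58Co, ~70.9 d
    t_irr_s=30.0 * 86400.0,            # 30-day irradiation
)
print(f"fast fluence ~ {fluence:.3e} n/cm^2")
```

The effective cross-section in the denominator is exactly the quantity the TRIPOLI calculations provide, which is why updating the code and nuclear data changes the inferred fluence.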
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images.
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2013-11-21
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of a homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested by performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, simulating 10^8 primary particles yielded a 2% average difference with respect to the kernel convolution method; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3-4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
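The voxel kernel convolution method that FLUKA is benchmarked against computes the dose map as the 3D convolution of the (cumulated) activity map with a voxel dose kernel, valid in a homogeneous medium. A minimal sketch with an invented 3 × 3 × 3 kernel (real voxel S-value kernels are larger and tabulated per radionuclide):

```python
import numpy as np

def dose_by_kernel_convolution(activity, kernel):
    """Voxel-level absorbed dose as the 3D convolution of a cumulated-
    activity map with a voxel dose kernel (homogeneous-medium assumption)."""
    pad = [(k // 2, k // 2) for k in kernel.shape]
    a = np.pad(activity, pad)
    out = np.zeros_like(activity, dtype=float)
    kz, ky, kx = kernel.shape
    kflip = kernel[::-1, ::-1, ::-1]        # flip for true convolution
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(a[i:i + kz, j:j + ky, k:k + kx] * kflip)
    return out

# Toy 3x3x3 kernel: most energy deposited locally, some in the neighbours
kernel = np.full((3, 3, 3), 0.01)
kernel[1, 1, 1] = 0.74                       # self-dose (central) voxel value
activity = np.zeros((8, 8, 8))
activity[4, 4, 4] = 100.0                    # single hot voxel
dose = dose_by_kernel_convolution(activity, kernel)
print(dose[4, 4, 4], dose[4, 4, 5], dose.sum())
```

Because the kernel sums to one here, total "energy" is conserved; the method breaks down at density heterogeneities, which is precisely where direct Monte Carlo on the CT-based density map is needed.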
Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*
Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G
2014-01-01
Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles, a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image
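The voxel kernel convolution method used as a reference above can be sketched in a few lines: the absorbed dose map is the 3D superposition of a voxel S-value kernel around every source voxel of the cumulated-activity image, under a homogeneous-medium assumption. The kernel values and array sizes below are purely illustrative, not taken from the paper:

```python
import numpy as np

def dose_from_activity(activity, s_kernel):
    """Direct 3D convolution: superpose the voxel S-value kernel
    (Gy per decay) around every source voxel of the cumulated-activity
    map (decays per voxel). A homogeneous medium is assumed, as in the
    water-phantom validation step."""
    rz, ry, rx = (k // 2 for k in s_kernel.shape)
    dose = np.zeros_like(activity)
    for (i, j, k), a in np.ndenumerate(activity):
        if a == 0.0:
            continue
        for (di, dj, dk), s in np.ndenumerate(s_kernel):
            ti, tj, tk = i + di - rz, j + dj - ry, k + dk - rx
            if (0 <= ti < dose.shape[0] and 0 <= tj < dose.shape[1]
                    and 0 <= tk < dose.shape[2]):
                dose[ti, tj, tk] += a * s
    return dose

# Hypothetical 3x3x3 kernel: self-dose plus six face neighbours
# (values are illustrative only, not real S-values).
s = np.zeros((3, 3, 3))
s[1, 1, 1] = 1.0e-11
for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
    s[1 + di, 1 + dj, 1 + dk] = 1.0e-12

activity = np.zeros((5, 5, 5))
activity[2, 2, 2] = 1.0e9  # 1e9 decays in the central voxel
dose = dose_from_activity(activity, s)
```

The direct Monte Carlo approach validated in the paper replaces this convolution with particle transport, which is what lets it handle the non-homogeneous CT-based density maps.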
1991-05-01
Version 00 MORSE-CGA was developed to add the capability of modelling rectangular lattices for nuclear reactor cores or for multipartitioned structures. It thus enhances the capability of the MORSE code system. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. It has been designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems may be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution is allowed.
Monte Carlo Shielding Analysis Capabilities with MAVRIC
Peplow, Douglas E.
2011-01-01
Monte Carlo shielding analysis capabilities in SCALE 6 are centered on the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. CADIS is used to create an importance map for space/energy weight windows as well as a biased source distribution. New to SCALE 6 are the Monaco functional module, a multi-group fixed-source Monte Carlo transport code, and the MAVRIC sequence (Monaco with Automated Variance Reduction Using Importance Calculations). MAVRIC uses the Denovo code (also new to SCALE 6) to compute coarse-mesh discrete ordinates solutions, which are used by CADIS to form an importance map and biased source distribution for the Monaco Monte Carlo code. MAVRIC allows the user to optimize the Monaco calculation for a specific tally using the CADIS method with little extra input compared to a standard Monte Carlo calculation. When computing several tallies at once or a mesh tally over a large volume of space, an extension of the CADIS method called FW-CADIS can be used to help the Monte Carlo simulation spread particles over phase space to get more uniform relative uncertainties.
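In CADIS terms, the adjoint (importance) function yields both products at once: a biased source proportional to the true source times the importance, and weight-window centers equal to the estimated response divided by the local importance, so that source particles are born inside their windows. A minimal sketch over an illustrative three-cell discretization (not SCALE/Denovo output):

```python
import numpy as np

def cadis_parameters(source, adjoint_flux):
    """CADIS in miniature over a coarse space/energy mesh: the biased
    source is the true source weighted by the adjoint (importance)
    function, and the weight-window center in each cell is the
    estimated response divided by the local importance, so source
    particles are born at the center of their window."""
    response = float(np.sum(source * adjoint_flux))   # estimated response R
    biased_source = source * adjoint_flux / response  # q_hat = q*phi_adj/R
    ww_center = response / adjoint_flux               # target weight R/phi_adj
    return biased_source, ww_center

q = np.array([0.5, 0.3, 0.2])        # true source pdf over 3 cells
phi_dag = np.array([2.0, 1.0, 0.1])  # adjoint flux (importance)
q_hat, w = cadis_parameters(q, phi_dag)
# A particle born in cell i carries weight q[i]/q_hat[i], which is
# exactly the window center w[i]: source and transport biasing are
# consistent, which is the "Consistent" in CADIS.
```

In practice the window lower and upper bounds are placed around these centers, and FW-CADIS repeats the construction with a forward-weighted adjoint source to flatten uncertainties over many tallies.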
Sarrut, David; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Conlin, Jeremy Lloyd; Tobin, Stephen J
2010-10-13
There is a great need in the safeguards community to be able to nondestructively quantify the mass of plutonium of a spent nuclear fuel assembly. As part of the Next Generation of Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling the assembly. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical simulations of using this method are shown. Finally, additional developments still needed and being worked on are discussed.
State-of-the-art Monte Carlo 1988
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Monte Carlo electron/photon transport
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs.
Monte Carlo techniques for analyzing deep-penetration problems
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1986-02-01
Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.
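The splitting and Russian roulette mentioned above are usually combined into a weight window. A minimal sketch, with window bounds and the survival-weight convention chosen for illustration rather than taken from any particular code:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Return the list of particle weights after applying a weight
    window [w_low, w_high]: heavy particles are split into equal
    shares, light particles play Russian roulette against a survival
    weight, and in-window particles pass through unchanged. The
    expected total weight is preserved in all three branches."""
    if weight > w_high:                       # splitting
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # Russian roulette
        w_survive = 0.5 * (w_low + w_high)    # illustrative convention
        if rng() < weight / w_survive:
            return [w_survive]
        return []                             # killed
    return [weight]
```

Biasing schemes such as the exponential transform drive weights far from unity; the window acts as the weight stabilizer that keeps the resulting variance under control.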
GOORLEY, TIM
2013-07-16
Version 00 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
GOORLEY, TIM
2013-07-16
Version 01 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
Monte Carlo surface flux tallies
Favorite, Jeffrey A
2010-11-19
Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
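The estimator under discussion can be sketched directly: each surface crossing scores weight divided by |μ|, and crossings in the grazing band score against a substitute cosine instead. The cutoff value and function name below are illustrative:

```python
def surface_flux_score(weight, mu, mu_cutoff=0.1, substitute_fraction=0.5):
    """Surface-crossing flux estimator: each crossing scores
    weight/|mu|, where mu is the cosine between the particle direction
    and the surface normal. Grazing crossings (|mu| < mu_cutoff) would
    give unbounded scores, so they instead score against a substitute
    cosine substitute_fraction * mu_cutoff. The standard practice uses
    0.5; the abstract argues 2/3 is more appropriate in some asymmetric
    grazing-band situations."""
    mu = abs(mu)
    if mu < mu_cutoff:
        return weight / (substitute_fraction * mu_cutoff)
    return weight / mu
```

With user control over both `mu_cutoff` and `substitute_fraction`, the same routine covers the symmetric internal-surface case and the one-sided external-surface case the abstract distinguishes.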
Fast quantum Monte Carlo on a GPU
NASA Astrophysics Data System (ADS)
Lutsyshyn, Y.
2015-02-01
We present a scheme for the parallelization of the quantum Monte Carlo method on graphics processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.
Doerner, Edgardo; Caprile, Paola
2016-01-01
The shape of the radiation source of a linac has a direct impact on the delivered dose distributions, especially in the case of small radiation fields. Traditionally, a single Gaussian source model is used to describe the electron beam hitting the target, although different studies have shown that the shape of the electron source can be better described by a mixed distribution consisting of two Gaussian components. Therefore, this study presents the implementation of a double Gaussian source model into the BEAMnrc Monte Carlo code. The impact of the double Gaussian source model for a 6 MV beam is assessed through the comparison of different dosimetric parameters calculated using a single Gaussian source, previously commissioned, the new double Gaussian source model and measurements, performed with a diode detector in a water phantom. It was found that the new source can be easily implemented into the BEAMnrc code and that it improves the agreement between measurements and simulations for small radiation fields. The impact of the change in source shape becomes less important as the field size increases and for increasing distance of the collimators to the source, as expected. In particular, for radiation fields delivered using stereotactic collimators located at a distance of 59 cm from the source, it was found that the effect of the double Gaussian source on the calculated dose distributions is negligible, even for radiation fields smaller than 5 mm in diameter. Accurate determination of the shape of the radiation source allows us to improve the Monte Carlo modeling of the linac, especially for treatment modalities such as IMRT, where the radiation beams used can be very narrow, becoming more sensitive to the shape of the source. PMID:27685141
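Sampling from such a two-component source reduces to choosing a component and then drawing a Gaussian deviate. The widths and mixture fraction below are illustrative placeholders, not the commissioned values from the study:

```python
import random

def sample_double_gaussian(sigma_narrow, sigma_broad, frac_narrow, rng):
    """Sample a transverse (x, y) position for a primary electron from
    a two-component Gaussian mixture: with probability frac_narrow draw
    from the narrow component, otherwise from the broad one. Parameter
    values come from fitting measured small-field profiles; the numbers
    used below are hypothetical."""
    sigma = sigma_narrow if rng.random() < frac_narrow else sigma_broad
    return rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)

rng = random.Random(42)
# widths in mm, 85% of primaries in the narrow component (illustrative)
pts = [sample_double_gaussian(0.7, 2.0, 0.85, rng) for _ in range(10_000)]
```

The broad, low-weight component mainly affects the penumbra of very small fields, which is why the difference from a single Gaussian fades as the field size grows.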
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
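The fit-and-evaluate idea can be illustrated compactly: tabulated data at a few temperatures are reduced to least-squares coefficients once, and any intermediate temperature is then evaluated on the fly during the random walk. Fitting a quadratic in √T, and the tabulated values themselves, are illustrative assumptions here, not the dissertation's actual parametrization:

```python
import numpy as np

def fit_temperature_coeffs(temps_K, values, order=2):
    """Pre-compute polynomial fit coefficients (linear least squares)
    for one tabulated quantity versus temperature, so it can be
    evaluated at any temperature during the simulation instead of
    storing a full table per temperature."""
    x = np.sqrt(np.asarray(temps_K, dtype=float))  # fit in sqrt(T)
    return np.polyfit(x, values, order)

def eval_at_temperature(coeffs, T_K):
    """On-the-fly evaluation at an arbitrary temperature T (kelvin)."""
    return np.polyval(coeffs, np.sqrt(T_K))

temps = [300.0, 600.0, 900.0, 1200.0]
# hypothetical tabulated values of one scattering-distribution parameter
vals = [1.00, 1.35, 1.62, 1.85]
c = fit_temperature_coeffs(temps, vals)
v_750 = eval_at_temperature(c, 750.0)  # between the 600 K and 900 K entries
```

Storing a handful of coefficients per quantity instead of full tables is what produces the memory reduction the abstract describes, from tens of megabytes per temperature per nuclide to a few megabytes per nuclide.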
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
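The essence of Monte Carlo vectorization is replacing one-history-at-a-time logic with array operations over a whole bank of particles. A minimal modern illustration, with NumPy standing in for CYBER-205 vector hardware and a hypothetical constant cross section:

```python
import numpy as np

def vectorized_flight_distances(rng, sigma_t, n):
    """Event-based vectorization in miniature: sample the next
    free-flight distance for a bank of n particles in one array
    operation, d = -ln(1 - U) / sigma_t, instead of looping over
    histories. sigma_t is a (hypothetical) constant total macroscopic
    cross section in 1/cm; 1 - U keeps the argument of log in (0, 1]."""
    return -np.log(1.0 - rng.random(n)) / sigma_t

rng = np.random.default_rng(7)
d = vectorized_flight_distances(rng, sigma_t=0.5, n=100_000)
# The sample mean approaches the mean free path 1/sigma_t = 2 cm.
```

Real event-based codes extend this idea to every stage of the walk, regrouping the particle bank by next event so each stage stays a pure array operation.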
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.
Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe
2015-01-01
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the Geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from Geant4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using Geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in Geant4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to determine fluctuations in energy deposition along the
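The two LET averages discussed above differ only in their weighting, which the following sketch makes explicit (units and step values are illustrative):

```python
def average_lets(energy_deposits, step_lengths):
    """Track-averaged and dose-averaged LET from per-step Monte Carlo
    tallies. Each step i deposits energy e_i over length l_i, so the
    step LET is e_i/l_i; the track average weights steps by track
    length, while the dose average weights them by deposited energy,
    which is why the dose average is the quantity sensitive to
    energy-deposition fluctuations and the step-size limit."""
    let_t = sum(energy_deposits) / sum(step_lengths)
    let_d = sum(e * e / l for e, l in zip(energy_deposits, step_lengths)) \
            / sum(energy_deposits)
    return let_t, let_d

# Two illustrative steps: 1 keV over 1 um and 9 keV over 1 um.
lt, ld = average_lets([1.0, 9.0], [1.0, 1.0])
```

Because each energy deposit enters the dose average quadratically, a few large fluctuation-driven deposits can pull it far above the track average, matching the step-limit sensitivity the abstract reports.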
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF
NASA Astrophysics Data System (ADS)
A. O., Q.; Gardner, R. P.
1995-12-01
A new Monte Carlo method for modelling photon transport in the presence of deep-penetration and streaming effects by combining a subspace weight window and biasing schemes has been developed. This method is based on use of an importance map from which an importance subspace is identified for a given particle transport system. Biasing schemes, including direction biasing and the exponential transform, are applied to drive particles into the importance subspace. The subspace weight window approach used consists of splitting and Russian roulette, which act as a particle weight stabilizer in the subspace to control weight fluctuations caused by the biasing schemes. This approach has been implemented in the optimization of the McLDL code, a specific purpose Monte Carlo code for modelling the spectral response of dual-spaced γ-γ litho-density logging tools, which are highly collimated, deep-penetration, three-dimensional, and low-yield photon transport systems. The McLDL code has been tested on a computational benchmark tool and benchmarked experimentally against laboratory test pit data for a commercial γ-γ litho-density logging tool (the Z-Densilog). The Monte Carlo Multiply Scattered Components (MCMSC) approach has been developed in conjunction with the McLDL code and Library Least-Squares (LLS) analysis. The MCMSC approach consists of constructing component libraries (1–4, 5–8 scatters, etc.) of γ-ray scattered spectra for a reference formation and borehole with the McLDL Monte Carlo code. Then the LLS approach is used with these library spectra to obtain empirical relationships between formation and borehole parameters and the component amounts. These, in turn, can be used to construct the spectra for samples with a range of formation and borehole parameters. This approach should significantly reduce the amount of experimental effort or extent of the Monte Carlo calculations necessary for complete logging tool calibration while maintaining a close physical
Monte Carlo Hybrid Applied to Binary Stochastic Mixtures
2008-08-11
The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to offer significant advantages for high-precision particle therapy, especially in media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously evaluated for uniform scanning proton beams, needs to be re-evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for a spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently transferred identically to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location was always set at the center of a 600 × 600 × 300 mm³ water phantom placed after the treatment nozzle. The percentage depth dose (PDD) was computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and proton ranges obtained with several computational parameters were then compared to those of FLUKA, and optimal parameters were determined from the accuracy of the proton range, suppression of dose deviation, and minimization of computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We
NASA Astrophysics Data System (ADS)
Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.
2014-06-01
Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished with the continuous-energy Monte Carlo code TRIPOLI-4® using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it requires reaching a very small variance associated with the reactivity in both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown excellent results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux and, consequently, does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method as implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark was used to test the accuracy of the method against the "direct" estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high
Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.
2006-07-01
Verification of the SUHAM-U code has been carried out by calculation of a two-dimensional benchmark experiment on the VENUS-2 critical light-water facility. Comparisons with experimental data and with calculations by the Monte Carlo code UNK, using the same B645 nuclear data library for the basic isotopes, have been performed. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The applicability of the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)
Ramos, M; Ferrer, S; Verdu, G
2005-01-01
The authors present a review of tallying processes with non-Boltzmann tallies in Monte Carlo simulations. A comparison between different pulse height and energy pulse height tallies has been performed with the MCNP5 code and PENEASY, a user-friendly version of the PENELOPE code. Several simulations have been carried out to estimate the pulse and deposited-energy spectra in a polymethyl methacrylate (PMMA) phantom used during quality control testing in digital mammography. In the case of MCNP5, the arbitrary energy loss that is activated by default for particles merely crossing the detector has been removed in order to compare the efficiency of the tally. PENEASY behaves similarly, counting all scores whether or not they have deposited energy in the phantom; a correction has been made to the code to remove this scoring. From the results, the deposited energy has been estimated as 3.73369e-3 MeV/particle for MCNP5 and 3.25468e-3 MeV/particle for PENEASY. Further studies are necessary to obtain more accurate results by modeling the compression plate and the imaging system. Pulse and energy pulse height spectra are still tallies under development, and every effort must be made to understand the tallying process in different applications. PMID:17282861
Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE
Bekar, Kursat B; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T; Dunn, Michael E
2013-01-01
Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated into the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability of the MCNP code provides for more accurate assessment of mixed radiation fields. The Nuclear Theory and Applications group of Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0
Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.
1996-10-01
Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from free-molecular to continuum in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they collide with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed, externally generated field or is determined using a local charge-neutrality assumption. Ion chemistry is modelled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be either externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package Tecplot is used for graphics display. The majority of the software packages are written in standard Fortran.
Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju
2015-01-01
SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of the Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ~10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the "Quick Kinchin-Pease" and "Full Cascades" options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements-per-atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477
Karalidi, Theodora; Apai, Dániel; Schneider, Glenn; Hanson, Jake R.; Pasachoff, Jay M.
2015-11-20
Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VIZIR.
Monte Carlo simulation of Touschek effect.
Xiao, A.; Borland, M.; Accelerator Systems Division
2010-07-30
We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.
Recent advances and future prospects for Monte Carlo
Brown, Forrest B
2010-01-01
The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.
Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe; Bronk, Lawrence; Geng, Changran; Grosshans, David
2015-11-15
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. A further purpose was to provide a recommendation for selecting an appropriate LET quantity from GEANT 4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT 4 for different tracking step size limits. A step size limit refers to the maximum allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons for six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy deposition per step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy deposition per step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to
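The distinction between the two averages can be made concrete. Given per-step energy deposits and step lengths from any tracking code, LET_t weights each step's stopping power by track length, while LET_d weights it by deposited energy (a generic sketch, not the authors' GEANT 4 scoring code):

```python
def track_and_dose_averaged_let(steps):
    """steps: iterable of (energy_deposit, step_length) per tracking step.
    Returns (LET_t, LET_d): LET_t weights each step's stopping power
    (deposit / length) by step length; LET_d weights it by deposit."""
    num_t = den_t = num_d = den_d = 0.0
    for de, dl in steps:
        let = de / dl            # stopping power on this step
        num_t += dl * let        # length-weighted
        den_t += dl
        num_d += de * let        # dose-weighted
        den_d += de
    return num_t / den_t, num_d / den_d

# two equal-length steps (units e.g. keV, um): a quiet step and a dense one
let_t, let_d = track_and_dose_averaged_let([(1.0, 1.0), (10.0, 1.0)])
```

LET_d is pulled toward the dense step far more strongly than LET_t, which is why the dose-averaged quantity is the more sensitive to how the stepping algorithm chops up the energy deposition.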
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload on a single-core processor when it comes to refined modeling. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
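The mode-searching idea can be sketched directly from that description: subtract an approximate Gaussian-mixture density built on the modes found so far from the target density, floor the difference at zero, and take the negative log as the residual energy, so that already-discovered modes are "filled in". This is a minimal 1D illustration with assumed unit-variance components, not the paper's implementation:

```python
import numpy as np

def gmm_pdf(x, modes, covs, weights):
    """Density of the approximate Gaussian mixture on found modes."""
    total = 0.0
    for w, m, c in zip(weights, modes, covs):
        d = x - m
        norm = (2 * np.pi) ** (-len(m) / 2) * np.linalg.det(c) ** -0.5
        total += w * norm * np.exp(-0.5 * d @ np.linalg.solve(c, d))
    return total

def residual_energy(x, target_logpdf, modes, covs, weights):
    """Energy of the residual density: target minus the mixture, floored
    at zero so already-discovered modes become flat high-energy regions."""
    resid = max(np.exp(target_logpdf(x)) - gmm_pdf(x, modes, covs, weights), 0.0)
    return -np.log(resid + 1e-300)

# bimodal 1D target (unit Gaussians at -3 and +3); mode at -3 already found
target = lambda x: (np.log(0.5) - 0.5 * np.log(2 * np.pi)
                    + np.logaddexp(-0.5 * (x[0] + 3) ** 2,
                                   -0.5 * (x[0] - 3) ** 2))
modes, covs, weights = [np.array([-3.0])], [np.eye(1)], [0.5]
e_found = residual_energy(np.array([-3.0]), target, modes, covs, weights)
e_new = residual_energy(np.array([3.0]), target, modes, covs, weights)
# the undiscovered mode at +3 now has much lower residual energy
```

A search run on this residual surface is therefore driven toward +3 rather than back to the mode it has already catalogued.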
Bobin, C; Thiam, C; Bouchard, J
2016-03-01
At LNE-LNHB, a liquid scintillation (LS) detection setup designed for Triple to Double Coincidence Ratio (TDCR) measurements is also used in the β-channel of a 4π(LS)β-γ coincidence system. This LS counter, based on 3 photomultipliers, was first modeled using the Monte Carlo code Geant4 to enable the simulation of optical photons produced by scintillation and Cerenkov effects. This stochastic modeling was especially designed for the calculation of double and triple coincidences between photomultipliers in TDCR measurements. In the present paper, this TDCR-Geant4 model is extended to 4π(LS)β-γ coincidence counting to enable the simulation of the efficiency-extrapolation technique by the addition of a γ-channel. This simulation tool aims to predict systematic biases in activity determination due to possible non-linearity of efficiency-extrapolation curves. First results are described in the case of the standardization of ⁵⁹Fe. The variation of the γ-efficiency in the β-channel due to the Cerenkov emission is investigated in the case of the activity measurements of ⁵⁴Mn. The problem of the non-linearity between β-efficiencies is featured in the case of the efficiency tracing technique for the activity measurements of ¹⁴C using ⁶⁰Co as a tracer.
NASA Astrophysics Data System (ADS)
Chao, T. C.; Xu, X. G.
2001-04-01
VIP-Man is a whole-body anatomical model newly developed at Rensselaer from the high-resolution colour images of the National Library of Medicine's Visible Human Project. This paper summarizes the use of VIP-Man and the Monte Carlo method to calculate specific absorbed fractions from internal electron emitters. A specially designed EGS4 user code, named EGS4-VLSI, was developed to use the extremely large number of image data contained in the VIP-Man. Monoenergetic and isotropic electron emitters with energies from 100 keV to 4 MeV are considered to be uniformly distributed in 26 organs. This paper presents, for the first time, results of internal electron exposures based on a realistic whole-body tomographic model. Because VIP-Man has many organs and tissues that were previously not well defined (or not available) in other models, the efforts at Rensselaer and elsewhere bring an unprecedented opportunity to significantly improve the internal dosimetry.
Lee, Y K
2005-01-01
The TRIPOLI-4.3 Monte Carlo transport code has been used to evaluate the QUADOS (Quality Assurance of Computational Tools for Dosimetry) problem P4, the neutron and photon response of an albedo-type thermoluminescence personal dosemeter (TLD) located on an ISO slab phantom. Two enriched ⁶LiF and two ⁷LiF TLD chips were used, protected in front or behind by a boron-loaded dosemeter holder. The neutron response of the four chips was determined by counting ⁶Li(n,t)⁴He events using the ENDF/B-VI.4 library, and the photon response by estimating absorbed dose (MeV g⁻¹). Ten neutron energies from thermal to 20 MeV and six photon energies from 33 keV to 1.25 MeV were used to study the energy dependence. The fraction of the neutron and photon response owing to phantom backscatter has also been investigated. Detailed TRIPOLI-4.3 solutions are presented and compared with MCNP-4C calculations. PMID:16381740
Monte Carlo calculation for the development of a BNCT neutron source (1 eV-10 keV) using the MCNP code.
El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H
2008-09-01
Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, such beams being extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H₂O, D₂O and BeO) and their combinations, four reflectors (Al₂O₃, C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H₂O, BeO and D₂O together with the Al₂O₃ reflector and the two filters Cd+Bi; it optimizes the epithermal fraction of the neutron spectrum at 72% and minimizes the thermal neutron fraction to 4%, and thus can be used to treat deep brain tumors. The calculations were performed by means of the Monte Carlo N-Particle code (MCNP5). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Isotropic Monte Carlo Grain Growth
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
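A minimal Potts-type sketch of isotropic grain growth on a hexagonal grid (axial indexing with six neighbours, periodic boundaries, zero-temperature acceptance) conveys the idea; the update rule and parameters here are illustrative, not IMCGG's:

```python
import random

HEX_NBRS = ((0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1))  # axial hex offsets

def boundary_energy(spins):
    """Total count of unlike neighbour pairs (each pair counted twice)."""
    n = len(spins)
    return sum(spins[i][j] != spins[(i + di) % n][(j + dj) % n]
               for i in range(n) for j in range(n) for di, dj in HEX_NBRS)

def mc_grain_step(spins):
    """One zero-temperature Monte Carlo update: pick a site, propose the
    orientation of a random neighbour, and accept if the site's count of
    unlike neighbours (its boundary energy) does not increase."""
    n = len(spins)
    i, j = random.randrange(n), random.randrange(n)
    nbrs = [((i + di) % n, (j + dj) % n) for di, dj in HEX_NBRS]
    site_energy = lambda s: sum(spins[a][b] != s for a, b in nbrs)
    a, b = random.choice(nbrs)
    if site_energy(spins[a][b]) <= site_energy(spins[i][j]):
        spins[i][j] = spins[a][b]

random.seed(0)
n, q = 32, 16          # lattice size and number of grain orientations
spins = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
e_before = boundary_energy(spins)
for _ in range(20000):
    mc_grain_step(spins)
e_after = boundary_energy(spins)   # boundaries shorten as grains coarsen
```

Because every accepted move lowers (or preserves) the number of unlike neighbour pairs, the total boundary length decreases monotonically, which is the driving force of normal grain growth.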
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multigroup SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multigroup SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
Monte Carlo simulation of Alaska wolf survival
NASA Astrophysics Data System (ADS)
Feingold, S. J.
1996-02-01
Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
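A minimal sketch of a Penna-style bit-string step with an added hunting-mortality probability (the kind of knob such a study varies) might look as follows; the threshold, birth age, carrying capacity and rates are illustrative assumptions, not those of the adapted code:

```python
import random

def penna_step(population, threshold=3, birth_age=8, hunt_prob=0.0, K=500):
    """One time step of a minimal Penna bit-string aging model.
    Bit a of a genome is a deleterious mutation that switches on at age a;
    an individual dies on reaching `threshold` active mutations, by
    hunting (hunt_prob), or by the Verhulst crowding factor N/K.
    Survivors past birth_age bear one child with one extra mutation."""
    n = len(population)
    survivors = []
    for genome, age in population:
        age += 1
        active = bin(genome & ((1 << age) - 1)).count("1")
        if (active >= threshold or random.random() < hunt_prob
                or random.random() < n / K):
            continue                                   # death
        survivors.append((genome, age))
        if age >= birth_age:
            survivors.append((genome | (1 << random.randrange(32)), 0))
    return survivors

def run(hunt_prob, steps=60, seed=3):
    random.seed(seed)
    pop = [(0, 0)] * 200
    for _ in range(steps):
        pop = penna_step(pop, hunt_prob=hunt_prob)
    return pop

light = run(0.05)    # light hunting: the population persists
heavy = run(0.60)    # heavy hunting: the population collapses
```

Raising the hunting mortality removes individuals before they reach reproductive age, which is the mechanism by which intensive hunting depresses survival in this family of models.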
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
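The fission-matrix idea can be illustrated without the transport code: once a matrix of expected region-to-region fission production has been tallied, its dominant eigenpair gives k-effective and the converged source shape, which can then re-seed the Monte Carlo source. The matrix entries below are toy numbers, not values from the thesis:

```python
import numpy as np

def fission_matrix_source(F, tol=1e-10, max_iter=1000):
    """Power iteration on a fission matrix F, where F[i, j] is the
    expected number of fission neutrons produced in region i per
    fission source neutron born in region j.
    Returns (k_eff, normalized source shape)."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k = s_new.sum()        # eigenvalue estimate = k-effective
        s_new /= k             # keep the source normalized to 1
        if np.abs(s_new - s).max() < tol:
            break
        s = s_new
    return k, s_new

# toy two-region fission matrix, illustrative numbers only
F = np.array([[1.0, 0.3],
              [0.3, 0.5]])
k_eff, source = fission_matrix_source(F)
```

Because the matrix iteration converges at the speed of linear algebra rather than of particle histories, using its eigenvector to reshape the sampled fission bank is one way to accelerate source convergence in high-dominance-ratio problems.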
2008-02-29
Version 00 (1) Problems to be solved: MVP/GMVP II can solve eigenvalue and fixed-source problems. The multigroup code GMVP can solve forward and adjoint problems for neutron, photon and neutron-photon coupled transport. The continuous-energy code MVP can solve only forward problems. Both codes can also perform time-dependent calculations. (2) Geometry description: MVP/GMVP employs combinatorial geometry to describe the calculation geometry. It describes spatial regions by combinations of 3-dimensional objects (BODIes). Currently, the following objects (BODIes) can be used. - BODIes with linear surfaces: half space, parallelepiped, right parallelepiped, wedge, right hexagonal prism - BODIes with quadratic and linear surfaces: cylinder, sphere, truncated right cone, truncated elliptic cone, ellipsoid of rotation, general ellipsoid - Arbitrary quadratic surface and torus. Rectangular and hexagonal lattice geometry can be used to describe repeated geometry. Furthermore, a statistical geometry model is available to treat coated fuel particles or pebbles for high-temperature reactors. (3) Particle sources: Various forms of energy-, angle-, space- and time-dependent distribution functions can be specified. See Abstract for more detail.
Carver, D; Kost, S; Pickens, D; Price, R; Stabin, M
2014-06-15
Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard CT 100-mm pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner for 100 and 120 kVp at 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared to both CTDI100 values determined from the ion chamber and to CTDI100 values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI100 values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%), and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI100 values agree to within 10% or less, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool with the significant benefit of also assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry such as verification of spatial dose distribution and beam width.
NASA Astrophysics Data System (ADS)
Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej
2016-08-01
The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of bin structure on the dose distributions was examined. MCNPX calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. scoring volumes defined in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Markus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. For the relative doses calculated with the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is the more time-effective method.
Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.
Densmore, J. D.; Urbatsch, T. J.; Evans, T. M.; Buksas, M. W.
2005-01-01
Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.
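The core DDMC mechanic, a particle taking discrete hops between spatial cells rather than continuous transport steps, can be illustrated with a toy one-dimensional walk (symmetric hop probabilities on a uniform grid with vacuum boundaries stand in for the cell-edge probabilities that DDMC derives from the discretized diffusion equation):

```python
import random

def ddmc_hop_walk(start, n_cells, rng):
    """Follow one particle taking discrete cell-to-cell hops until it leaks.

    Symmetric hop probabilities are a toy stand-in for the probabilities a
    real DDMC scheme would compute from the discretized diffusion equation.
    """
    cell = start
    while 0 <= cell < n_cells:
        cell += 1 if rng.random() < 0.5 else -1
    return cell >= n_cells          # True: leaked through the right boundary

rng = random.Random(1)
n_cells, start, samples = 10, 3, 50000
p_right = sum(ddmc_hop_walk(start, n_cells, rng) for _ in range(samples)) / samples
```

For this homogeneous, symmetric case the walk is a gambler's-ruin problem, so the right-leakage probability from cell k is (k+1)/(n+1), i.e. 4/11 here; an actual DDMC implementation would also advance the particle's continuous time at each hop and hand the particle to standard Monte Carlo at optically thin boundaries.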
Monte Carlo simulations on SIMD computer architectures
Burmester, C.P.; Gronsky, R.; Wille, L.T.
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
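A minimal sketch of the data-parallel update style described above, assuming a 2D nearest-neighbour Ising model: the lattice is split into two checkerboard sublattices so that all sites of one colour can be updated simultaneously, as a processor array would do. The NumPy whole-array operations stand in for the per-processor work on a SIMD machine.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep, updating each checkerboard sublattice in a
    single data-parallel step (sites of one colour share no bonds)."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                     # cost of flipping each site
        flip = (rng.random(spins.shape) < np.exp(-beta * dE)) & mask
        spins[flip] *= -1

def energy(spins):
    # Each of the 2N bonds counted once via two shifted copies.
    return -np.sum(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1)))

rng = np.random.default_rng(2)
s = rng.choice([-1, 1], size=(32, 32))
e0 = energy(s)
for _ in range(100):                               # anneal below T_c
    checkerboard_sweep(s, beta=1.0, rng=rng)
e1 = energy(s)
```

Because same-colour sites share no bonds, the simultaneous update is a valid Metropolis scheme; at beta = 1.0 (well below the critical temperature) the lattice orders and the energy drops sharply from its random-start value.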
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.
Monte Carlo Methods in the Physical Sciences
Kalos, M H
2007-06-06
I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
In a domain-decomposed, coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Daures, J; Gouriou, J; Bordy, J M
2011-03-01
This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity Hp(3). In this study, a set of energy- and angular-dependent conversion coefficients (Hp(3)/Ka), in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The Hp(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation, as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially at small angles of incidence. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with MCNP5 using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is deposited by secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of Hp(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
Monte Carlo techniques for analyzing deep penetration problems
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
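Russian roulette, reviewed above, is easy to state precisely: a particle whose weight falls below a cutoff either survives with an increased weight or is killed, with the survival probability chosen so that the expected weight is unchanged (an unbiased game). A minimal sketch, with illustrative parameter names:

```python
import random

def russian_roulette(weight, w_cut, w_survive, rng):
    """Terminate low-weight particles without biasing the expected weight.

    Below the cutoff, the particle survives with probability weight/w_survive
    and is promoted to weight w_survive, so E[new weight] = weight.
    """
    if weight >= w_cut:
        return weight
    if rng.random() < weight / w_survive:
        return w_survive            # survivor carries the boosted weight
    return 0.0                      # killed

rng = random.Random(4)
weights = [0.05] * 200000           # a population of low-weight particles
after = [russian_roulette(w, w_cut=0.1, w_survive=0.5, rng=rng) for w in weights]
mean_after = sum(after) / len(after)
```

The population mean is preserved (0.05 here) while most histories are terminated, which is exactly the efficiency trade the review discusses: fewer tracked particles at the cost of added variance per history.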
NASA Astrophysics Data System (ADS)
Verhaegen, Frank
2002-05-01
High atomic number (Z) heterogeneities in tissue exposed to photons with energies of up to about 1 MeV can cause significant dose perturbations in their immediate vicinity. The recently released Monte Carlo (MC) code EGSnrc (Kawrakow 2000a Med. Phys. 27 485-98) was used to investigate the dose perturbation of high-Z heterogeneities in tissue in kilovolt (kV) and 60Co photon beams. Simulations were performed of measurements with a dedicated thin-window parallel-plate ion chamber near a high-Z interface in a 60Co photon beam (Nilsson et al 1992 Med. Phys. 19 1413-21). Good agreement was obtained between simulations and measurements for a detailed set of experiments in which the thickness of the ion chamber window, the thickness of the air gap between ion chamber and heterogeneity, the depth of the ion chamber in polystyrene and the material of the interface were varied. The EGSnrc code offers several improvements in the electron and photon production and transport algorithms over the older EGS4/PRESTA code (Nelson et al 1985 Stanford Linear Accelerator Center Report SLAC-265, Bielajew and Rogers 1987 Nucl. Instrum. Methods Phys. Res. B 18 165-81). The influence of the new EGSnrc features was investigated for simulations of a planar slab of a high-Z medium embedded in water and exposed to kV or 60Co photons. It was found that using the new electron transport algorithm in EGSnrc, including relativistic spin effects in elastic scattering, significantly affects the calculation of dose distribution near high-Z interfaces. The simulations were found to be independent of the maximum fractional electron energy loss per step (ESTEPE), which was often a cause for concern in older EGS4 simulations. Concerning the new features of the photon transport algorithm, sampling of the photoelectron angular distribution was found to have a significant effect, whereas the effect of binding energies in Compton scatter was found to be negligible. A slight dose artefact very close to high
Hajizadeh-Safar, M; Ghorbani, M; Khoshkharam, S; Ashrafi, Z
2014-07-01
The gamma camera is an important apparatus in nuclear medicine imaging. Its detection part consists of a scintillation detector with a heavy collimator. The substitution of semiconductor detectors for the scintillator in these cameras has been studied effectively. In this study, we aim to introduce a new design of P-N semiconductor detector array for nuclear medicine imaging. A P-N semiconductor detector composed of N-SnO2:F and P-NiO:Li has been introduced through simulation with the MCNPX Monte Carlo code. Its sensitivity to factors such as thickness, dimension, and direction of the emission photons was investigated. It was then used to configure a new design of a one-dimensional array and to study its spatial resolution for nuclear medicine imaging. A one-dimensional array with 39 detectors was simulated to measure a predefined linear distribution of Tc-99m activity and its spatial resolution. The activity distribution was calculated from the detector responses through mathematical linear optimization using the LINPROG code in MATLAB. Three configurations of the one-dimensional detector array were simulated: horizontal, vertical single-sided, and vertical double-sided. In all of these configurations, the energy window around the photopeak was ±1%. The results show that the detector response increases with the dimension and thickness of the detector, with the highest sensitivity for emission photons 15-30° above the surface. The horizontal configuration is not suitable for imaging of line activity sources. The activity distribution measured with the vertical double-sided configuration has no similarity to the emission sources and hence is not suitable for imaging purposes. The activity distribution measured with the vertical single-sided configuration agrees well with the sources. Therefore, it could be introduced as a suitable configuration for nuclear medicine imaging. It has been shown that using
Monte Carlo radiation transport parallelism
Cox, L. J.; Post, S. E.
2002-01-01
This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.
Development of a Space Radiation Monte Carlo Computer Simulation
NASA Technical Reports Server (NTRS)
Pinsky, Lawrence S.
1997-01-01
The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.
Biju, K; Sunil, C; Sarkar, P K
2013-12-01
FLUKA Monte Carlo simulations are carried out to estimate the 41Ar concentration inside accelerator vaults of various sizes when proton beams of energy 0.1-1.0 GeV are incident on thick copper and lead targets. Generally, the 41Ar concentration is estimated using an empirical formula suggested in NCRP 144, which assumes that the activation is caused by thermal neutrons alone. It is found that while the analytical and Monte Carlo techniques give similar results for the thermal neutron fluence inside the vault, the 41Ar concentration is under-predicted by the empirical formula. It is also found that thermal neutrons contribute ∼41% to the total 41Ar production, while 56% of the production is caused by neutrons between 0.025 and 1 eV. A modified factor is suggested for use in the empirical expression to estimate the 41Ar activity in 0.1-1.0-GeV proton accelerator enclosures.
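The thermal-neutron-only estimate discussed above has the form of a simple saturation-activation calculation, A_sat = n_Ar · σ_th · φ_th per unit volume of air. The numerical values below are illustrative assumptions (not values from the paper), but they show the arithmetic:

```python
# Saturation 41Ar activity concentration from thermal-neutron capture on 40Ar.
# All numbers below are illustrative assumptions, not data from the paper.
N_A = 6.022e23                  # Avogadro's number, 1/mol
molar_volume = 22414.0          # cm^3/mol of ideal gas at STP
f_Ar = 0.0093                   # argon volume fraction of air
sigma_th = 0.66e-24             # cm^2, assumed 40Ar thermal capture cross section
phi = 1.0e4                     # n/cm^2/s, assumed thermal fluence rate in the vault

n_Ar = f_Ar * N_A / molar_volume        # argon atoms per cm^3 of air
A_sat = n_Ar * sigma_th * phi           # Bq per cm^3 of air at saturation
A_sat_m3 = A_sat * 1e6                  # Bq per m^3
```

The paper's point is that this formula misses the 0.025 eV-1 eV neutrons that dominate the production, so a correction factor on σ_th · φ_th (or an epithermal term) is needed for GeV proton vaults.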
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with a simple system such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizable parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter lists showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical models, particle transport mechanics and geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
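The telescoping identity behind MLMC, E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], can be sketched on a toy problem: estimating E[S_T] for geometric Brownian motion under Euler-Maruyama, where each correction term couples a fine and a coarse path by reusing the same Brownian increments. This is the standard i.i.d. illustration, not the paper's PDE/SMC setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc_level(level, n, T=1.0, S0=1.0, r=0.05, sigma=0.2, M=2):
    """Mean of P_l - P_{l-1} (or of P_0 at level 0) for P = S_T under
    Euler-Maruyama, with fine and coarse paths driven by the same noise."""
    nf = M ** level                 # number of fine time steps
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
    Sf = np.full(n, S0)
    for i in range(nf):             # fine path
        Sf = Sf * (1.0 + r * hf + sigma * dW[:, i])
    if level == 0:
        return Sf.mean()
    nc = nf // M
    hc = T / nc
    dWc = dW.reshape(n, nc, M).sum(axis=2)   # coarse increments reuse fine noise
    Sc = np.full(n, S0)
    for i in range(nc):             # coupled coarse path
        Sc = Sc * (1.0 + r * hc + sigma * dWc[:, i])
    return (Sf - Sc).mean()

L = 4
estimate = sum(mlmc_level(l, 20000) for l in range(L + 1))   # telescoping sum
```

Because the coupled difference terms have small variance, far fewer samples are needed at the expensive fine levels than a single-level estimator would require for the same error; the exact answer here is E[S_T] = exp(rT).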
Monte Carlo calculations of nuclei
Pieper, S.C.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
NASA Astrophysics Data System (ADS)
Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.
2016-07-01
This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing the E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
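The key point of the VSM, that sampling from a joint (correlated) histogram preserves correlations that independent marginal histograms would destroy, can be sketched in two dimensions. The toy variables below are stand-ins for a correlated pair such as E and r_s; this is not the authors' 4D implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two correlated toy phase-space variables (stand-ins for E and r_s).
E = rng.normal(6.0, 1.0, 100000)
r = 0.8 * E + rng.normal(0.0, 0.3, 100000)

# Build a joint (correlated) 2D histogram and resample from it:
# a bin is drawn with its joint probability, then the value is jittered
# uniformly inside the bin.
H, E_edges, r_edges = np.histogram2d(E, r, bins=50)
p = H.ravel() / H.sum()
idx = rng.choice(p.size, size=100000, p=p)
iE, ir = np.unravel_index(idx, H.shape)
E_s = E_edges[iE] + rng.random(idx.size) * np.diff(E_edges)[iE]
r_s = r_edges[ir] + rng.random(idx.size) * np.diff(r_edges)[ir]
```

Sampling each marginal histogram independently would drive the correlation of the generated pairs toward zero; the joint histogram reproduces it up to binning resolution, which is why the paper invests in adaptive 4D binning schemes.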
Caruso, A.; Cherubini, S.; Spitaleri, C.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Crucillà, V.; Gulino, M.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; and others
2015-02-24
Novae are violent astrophysical explosions occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called 'narrow systems' because the two components interact with each other: a process of mass exchange results in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, consisting mainly of hydrogen. Over time, more and more material accumulates until the pressure and temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of 'hot hydrogen burning' are then ejected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation with energies below 511 keV. The gamma rays produced by the annihilation of positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of the study of nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels. This reaction was therefore studied at energies of astrophysical interest. The experiment done at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code
NASA Astrophysics Data System (ADS)
Caruso, A.; Cherubini, S.; Spitaleri, C.; Crucillà, V.; Gulino, M.; La Cognata, M.; Lamia, L.; Rapisarda, G.; Romano, S.; Sergi, ML.; Kubono, S.; Yamaguchi, H.; Hayakawa, S.; Wakabayashi, Y.; Iwasa, N.; Kato, S.; Komatsubara, T.; Teranishi, T.; Coc, A.; Hammache, F.; de Séréville, N.
2015-02-01
Novae are violent astrophysical explosions occurring in close binary systems consisting of a white dwarf and a main-sequence star or a star in a more advanced stage of evolution. They are called "narrow systems" because the two components interact with each other: a process of mass exchange results in the transfer of matter from the companion star to the white dwarf, leading to the formation around the latter of a so-called accretion disk, consisting mainly of hydrogen. Over time, more and more material accumulates until the pressure and temperature reached are sufficient to trigger nuclear fusion reactions, rapidly converting a large part of the hydrogen into heavier elements. The products of "hot hydrogen burning" are then ejected into the interstellar medium by the violent explosions. Studies of the element abundances observed in these events can provide important information about the stages of stellar evolution. During the outbursts of novae some radioactive isotopes are synthesized: in particular, short-lived nuclei such as 13N and 18F decay with subsequent emission of gamma radiation with energies below 511 keV. The gamma rays produced by the annihilation of positrons emitted in the decay of 18F are the most abundant and the first observable as soon as the atmosphere of the nova starts to become transparent to gamma radiation. Hence the importance of the study of nuclear reactions that lead both to the formation and to the destruction of 18F. Among these, the 18F(p,α)15O reaction is one of the main destruction channels. This reaction was therefore studied at energies of astrophysical interest. The experiment done at RIKEN, Japan, has as its objective the study of the 18F(p,α)15O reaction, using a beam of 18F produced at CRIB, to derive important information about the phenomenon of novae. In this paper we present the experimental technique and the Monte Carlo code developed to be used in the data analysis process.
NASA Astrophysics Data System (ADS)
Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.
2016-07-01
This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting particles according to their last interaction location (target, primary collimator, or FF). For each sub-source, 4D correlated histograms were built by storing E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and a heterogeneous phantom, were compared using 1.5%-0 mm and 2%-0 mm global gamma indices, respectively. Finally, portal images were calculated without and with phantoms in the beam, and the model was evaluated using a 1%-0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
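The key idea of storing joint (correlated) histograms rather than independent 1D marginals can be sketched as follows. This toy uses fixed bin widths and generic variable tuples rather than the paper's adaptive 4D binning; all function names are illustrative, not from the paper's code:

```python
import random
from collections import Counter

def build_joint_histogram(particles, widths):
    """Bin particles (tuples of variables, e.g. (E, r_s, phi_d, theta_d))
    into a joint histogram with fixed bin widths. Storing joint counts,
    rather than one 1D histogram per variable, preserves the correlations
    between variables, which is the point of a correlated-histogram VSM."""
    return Counter(
        tuple(int(x // w) for x, w in zip(p, widths)) for p in particles
    )

def sample_particle(counts, widths, rng):
    """Draw one particle: pick a joint bin with probability proportional
    to its count, then sample uniformly inside that bin."""
    bins = list(counts)
    weights = [counts[b] for b in bins]
    b = rng.choices(bins, weights=weights)[0]
    return tuple((i + rng.random()) * w for i, w in zip(b, widths))
```

With perfectly correlated input pairs, samples drawn this way stay correlated at the bin level, whereas sampling each variable from its own 1D histogram would destroy that structure.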
Monte Carlo tests of the ELIPGRID-PC algorithm
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
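A Monte Carlo check of grid-based hot-spot detection probabilities can be sketched as follows. This is a simplified circular-spot analogue of the validation described, not the ELIPGRID algorithm itself (which handles tilted ellipses); the function and its parameters are illustrative:

```python
import math
import random

def detection_prob(radius, spacing, n_trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that a circular hot spot
    of the given radius is hit by at least one node of a square sampling
    grid with the given spacing. By symmetry, only the position of the
    hot-spot center within a single grid cell matters."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        cx = rng.uniform(0.0, spacing)
        cy = rng.uniform(0.0, spacing)
        # The grid node nearest to any point in the cell is one of the
        # four cell corners, so checking those four suffices.
        hits += any(
            math.hypot(cx - gx, cy - gy) <= radius
            for gx in (0.0, spacing)
            for gy in (0.0, spacing)
        )
    return hits / n_trials
```

For a small circular spot the estimate can be checked against the analytic value pi * radius^2 / spacing^2, and a spot with radius above spacing/sqrt(2) is always detected.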
Marcus, Ryan C.
2012-07-24
This presentation covers (1) exascale computing: the different technologies and the path to them; (2) the high-performance proof-of-concept MCMini: features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library): its purpose and features. Despite driver issues, OpenCL appears to be a good, hardware-agnostic tool. MCMini demonstrates the feasibility of GPGPU-based Monte Carlo methods, showing great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
Monte Carlo Experiments: Design and Implementation.
ERIC Educational Resources Information Center
Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian
2001-01-01
Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-06-15
Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets, and a beam monitor system was calculated with the help of Matlab, using matrices that solve the equation of motion of a charged particle in a magnetic field and a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA, and the transport of 80 MeV/u 12C ions through this ion-optic system was calculated using a user-routine to implement magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and de-focusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
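The matrix formalism referred to, transfer matrices that solve the linear equation of motion in drifts and quadrupoles, can be sketched as follows (in Python rather than Matlab; function names and example parameters are illustrative, not taken from the paper):

```python
import math

def drift(L):
    """Transfer matrix of a field-free drift of length L (m)."""
    return [[1.0, L], [0.0, 1.0]]

def quad_focusing(k, L):
    """Transfer matrix of a quadrupole focusing in this plane
    (normalized strength k in 1/m^2, length L in m)."""
    w = math.sqrt(k)
    return [[math.cos(w * L), math.sin(w * L) / w],
            [-w * math.sin(w * L), math.cos(w * L)]]

def quad_defocusing(k, L):
    """Transfer matrix of a quadrupole defocusing in this plane."""
    w = math.sqrt(k)
    return [[math.cosh(w * L), math.sinh(w * L) / w],
            [w * math.sinh(w * L), math.cosh(w * L)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(elements, x, xp):
    """Propagate a particle (position x, slope x') through a beamline,
    given the element matrices in beam order."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for el in elements:
        M = matmul(el, M)
    return (M[0][0] * x + M[0][1] * xp,
            M[1][0] * x + M[1][1] * xp)
```

An optimizer can then vary the two quadrupole strengths so that the beam-spot size at the iso-center (the spread of tracked x over an input particle distribution) is minimized in both transverse planes.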
Handley, G. R.; Masters, L. C.; Stachowiak, R. V.
1981-04-10
Validation of the Monte Carlo criticality code, KENO IV, and the Hansen-Roach sixteen-energy-group cross sections was accomplished by calculating the effective neutron multiplication constant, k_eff, of 29 experimentally critical assemblies which had uranium enrichments of 92.6% or higher in the uranium-235 isotope. The experiments were chosen so that a large variety of geometries and of neutron energy spectra were covered. Problems in calculating the k_eff of systems with high-uranium-concentration uranyl nitrate solution that were minimally reflected or unreflected resulted in the separate examination of five cases.
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-12-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
Shell model Monte Carlo methods
Koonin, S.E.; Dean, D.J.
1996-10-01
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.
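Schematically, the reduction described is a Hubbard-Stratonovich transformation (generic notation, not taken from the paper): the inverse temperature is split into $N_t$ time slices, and each slice of the two-body propagator is written as a Gaussian-weighted integral over one-body propagators,

$$
e^{-\beta \hat{H}} \;\simeq\; \prod_{t=1}^{N_t} \int \mathcal{D}[\sigma_t]\, G(\sigma_t)\, e^{-\Delta\beta\, \hat{h}(\sigma_t)}, \qquad \Delta\beta = \beta/N_t,
$$

where $\hat{h}(\sigma)$ is a one-body Hamiltonian linear in the auxiliary fields $\sigma$ and $G$ is a Gaussian weight; the high-dimensional integral over field configurations is the path integral evaluated by Monte Carlo sampling.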
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing of the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Womersley, J. . Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra
Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor
2009-09-15
The energy-resolved emission-angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process; the higher the relative photon energy, the smaller the emission angle at which this maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALCV1.0 code.
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, the calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and the distribution of random walkers over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
An unbiased Hessian representation for Monte Carlo PDFs
NASA Astrophysics Data System (ADS)
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan
2015-08-01
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a considerably smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.
Vectorization of Monte Carlo particle transport
Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. . Computer Center; Los Alamos National Lab., NM; Supercomputing Research Center, Bowie, MD )
1989-01-01
Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.
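The modified Amdahl's Law mentioned can be illustrated generically. The exact form of the data-motion term used in the paper is not reproduced here, so the additive `overhead` term below is an assumption, not the authors' model:

```python
def amdahl_speedup(f_vec, r_vec, overhead=0.0):
    """Speedup over the scalar code when a fraction f_vec of the scalar
    runtime vectorizes with a vector/scalar rate ratio r_vec, plus a
    normalized 'overhead' term representing extra data motion incurred
    by the vector code (illustrative form)."""
    return 1.0 / ((1.0 - f_vec) + f_vec / r_vec + overhead)
```

With overhead = 0 this reduces to classic Amdahl's Law; a nonzero data-motion term caps the achievable speedup below the classic prediction, which is the qualitative point of the modification.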
Chemical application of diffusion quantum Monte Carlo
NASA Technical Reports Server (NTRS)
Reynolds, P. J.; Lester, W. A., Jr.
1984-01-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The dependence of computational time on the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes on traditional computer architectures.
PEPSI — a Monte Carlo generator for polarized leptoproduction
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by the electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
NASA Astrophysics Data System (ADS)
Cochran, Thomas
2007-04-01
In 2002 and again in 2003, an investigative journalism unit at ABC News transported a 6.8 kilogram metallic slug of depleted uranium (DU) via shipping container from Istanbul, Turkey to Brooklyn, NY and from Jakarta, Indonesia to Long Beach, CA. Targeted inspection of these shipping containers by Department of Homeland Security (DHS) personnel, which included the use of gamma-ray imaging, portal monitors, and hand-held radiation detectors, did not uncover the hidden DU. Monte Carlo analysis of the gamma-ray intensity and spectrum of a DU slug and one consisting of highly-enriched uranium (HEU) showed that DU was a proper surrogate for testing the ability of DHS to detect the illicit transport of HEU. Our analysis using MCNP-5 illustrated the ease of fully shielding an HEU sample to avoid detection. The assembly of an Improvised Nuclear Device (IND), a crude atomic bomb, from sub-critical pieces of HEU metal was then examined via Monte Carlo criticality calculations. Nuclear explosive yields of such an IND as a function of the speed of assembly of the sub-critical HEU components were derived. A comparison was made between the more rapid assembly of sub-critical pieces of HEU in the ``Little Boy'' (Hiroshima) weapon's gun barrel and gravity assembly (i.e., dropping one sub-critical piece of HEU on another from a specified height). Based on the difficulty of detecting HEU and the straightforward construction of an IND utilizing HEU, current U.S. government policy must be modified to more urgently prioritize eliminating and securing the global inventories of HEU.
Total Monte Carlo evaluation for dose calculations.
Sjöstrand, H; Alhassan, E; Conroy, S; Duan, J; Hellesen, C; Pomp, S; Österlund, M; Koning, A; Rochman, D
2014-10-01
Total Monte Carlo (TMC) is a method to propagate nuclear data (ND) uncertainties in transport codes, by using a large set of ND files, which covers the ND uncertainty. The transport code is run multiple times, each time with a unique ND file, and the result is a distribution of the investigated parameter, e.g. dose, where the width of the distribution is interpreted as the uncertainty due to ND. Until recently, this was computer intensive, but with a new development, fast TMC, more applications are accessible. The aim of this work is to test the fast TMC methodology on a dosimetry application and to propagate the (56)Fe uncertainties on the predictions of the dose outside a proposed 14-MeV neutron facility. The uncertainty was found to be 4.2 %. This can be considered small; however, this cannot be generalised to all dosimetry applications and so ND uncertainties should routinely be included in most dosimetry modelling.
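The TMC loop itself is simple: run the transport calculation once per random nuclear-data file and interpret the spread of the results as the ND uncertainty. The sketch below uses a toy stand-in for the transport code and a hypothetical linear dose response; names and numbers are illustrative only:

```python
import random
import statistics

def total_monte_carlo(run_transport, nd_files):
    """Total Monte Carlo sketch: run the transport calculation once per
    nuclear-data file and report the mean and spread of the result; the
    spread is interpreted as the uncertainty due to nuclear data."""
    results = [run_transport(f) for f in nd_files]
    return statistics.mean(results), statistics.stdev(results)

# Toy stand-in: each "ND file" is a randomly perturbed cross section
# (4.2% relative spread, echoing the abstract's figure), and the
# "transport result" (e.g. a dose) responds linearly to it.
rng = random.Random(42)
nd_files = [1.0 + rng.gauss(0.0, 0.042) for _ in range(500)]
dose = lambda xs: 10.0 * xs          # hypothetical dose response
mean, spread = total_monte_carlo(dose, nd_files)
```

In a real application `run_transport` is a full code such as MCNP, each ND file is a complete evaluated library realization (e.g. from TENDL), and the dominant cost is the repeated transport runs, which is what the "fast TMC" development reduces.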
NASA Astrophysics Data System (ADS)
Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.
2014-06-01
The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.
Monte Carlo shipping cask calculations using an automated biasing procedure
Tang, J.S.; Hoffman, T.J.; Childs, R.L.; Parks, C.V.
1983-01-01
This paper describes an automated biasing procedure for Monte Carlo shipping cask calculations within the SCALE system - a modular code system for Standardized Computer Analysis for Licensing Evaluation. The SCALE system was conceived and funded by the US Nuclear Regulatory Commission to satisfy a strong need for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems.
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator, and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom, and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2). PMID:27415383
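The flat-histogram mechanics of SAMC can be sketched on a toy model with a known answer: n coins whose "energy" E is the number of heads, so the exact density of states is the binomial coefficients. This is a minimal one-dimensional sketch, not the paper's multidimensional generalization; all names and parameters are illustrative:

```python
import math
import random

def samc_coins(n=10, steps=200_000, t0=1000.0, seed=3):
    """Stochastic Approximation Monte Carlo on a toy model: estimate the
    density of states g(E) of n coins, where E is the number of heads
    (exact answer: the binomial coefficients C(n, E))."""
    rng = random.Random(seed)
    state = [0] * n
    E = 0
    log_g = [0.0] * (n + 1)              # running estimate of ln g(E)
    for t in range(1, steps + 1):
        i = rng.randrange(n)             # propose flipping one coin
        E_new = E + (1 - 2 * state[i])
        # Accept with prob. min(1, g(E)/g(E_new)): flat histogram in E
        delta = log_g[E] - log_g[E_new]
        if delta >= 0.0 or rng.random() < math.exp(delta):
            state[i] ^= 1
            E = E_new
        log_g[E] += t0 / max(t0, t)      # SAMC gain, decreasing with t
    # Normalize the estimate to the known total of 2**n microstates
    m = max(log_g)
    total = sum(math.exp(lg - m) for lg in log_g)
    return [math.exp(lg - m) * (2 ** n) / total for lg in log_g]
```

The multidimensional version of the paper replaces the scalar E by a tuple of macroscopic variables and log_g by a multidimensional array, with the same decreasing-gain update.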
Uncertainty Propagation with Fast Monte Carlo Techniques
NASA Astrophysics Data System (ADS)
Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.
2014-04-01
Two new and faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo (TMC) method, namely the large multiplication of computation time compared to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical) at small additional calculation time. The new methods are presented and compared with the TMC method for criticality benchmarks.
Monte Carlo radiation transport: A revolution in science
Hendricks, J.
1993-04-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
Monte Carlo stratified source-sampling
Blomquist, R.N.; Gelbard, E.M.
1997-09-01
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test-problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.
Monte Carlo calculation of monitor unit for electron arc therapy
Chow, James C. L.; Jiang Runqing
2010-04-15
Purpose: Monitor unit (MU) calculations for electron arc therapy were carried out using Monte Carlo simulations and verified by measurements. Variations in the dwell factor (DF), source-to-surface distance (SSD), and treatment arc angle (α) were studied. Moreover, the possibility of measuring the DF, which requires gantry rotation, using a solid water rectangular, instead of cylindrical, phantom was investigated. Methods: A phase space file based on the 9 MeV electron beam with rectangular cutout (physical size = 2.6 x 21 cm^2) attached to the block tray holder of a Varian 21 EX linear accelerator (linac) was generated using the EGSnrc-based Monte Carlo code and verified by measurement. The relative output factor (ROF), SSD offset, and DF, needed in the MU calculation, were determined using measurements and Monte Carlo simulations. An ionization chamber, a radiographic film, a solid water rectangular phantom, and a cylindrical phantom made of polystyrene were used in dosimetry measurements. Results: Percentage deviations of ROF, SSD offset, and DF between measured and Monte Carlo results were 1.2%, 0.18%, and 1.5%, respectively. It was found that the DF decreased with an increase in α, and such a decrease in DF was more significant in the α range of 0 to 60 deg than 60 to 120 deg. Moreover, for a fixed α, the DF increased with an increase in SSD. Comparing the DF determined using the rectangular and cylindrical phantoms through measurements and Monte Carlo simulations, it was found that the DF determined by the rectangular phantom agreed with that by the cylindrical one within ±1.2%. This shows that a simple setup of a solid water rectangular phantom was sufficient to replace the cylindrical phantom using our specific cutout to determine the DF associated with the electron arc. Conclusions: By verifying using dosimetry measurements, Monte Carlo simulations proved to be an alternative way to perform MU calculations effectively.
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and the production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2 MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous-energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as the S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file, "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, and power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Monte Carlo Simulations for Radiobiology
NASA Astrophysics Data System (ADS)
Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward
2012-02-01
The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposit in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.
NASA Technical Reports Server (NTRS)
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
Clinical implementation of full Monte Carlo dose calculation in proton beam therapy.
Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn
2008-09-01
The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data information was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link of the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, both due to dose degradation and overall differences in range prediction due to bony anatomy in the beam path. Further, the Monte Carlo reports dose-to-tissue as compared to dose-to-water by the planning system. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical Systems Inc.).
Monte Carlo study of TLD measurements in air cavities.
Haraldsson, Pia; Knöös, Tommy; Nyström, Håkan; Engström, Per
2003-09-21
Thermoluminescent dosimeters (TLDs) are used for verification of the delivered dose during IMRT treatment of head and neck carcinomas. The TLDs are put into a plastic tube, which is placed in the nasal cavities through the treated volume. In this study, the dose distribution to a phantom having a cylindrical air cavity containing a tube was calculated by Monte Carlo methods and the results were compared with data from a treatment planning system (TPS) to evaluate the accuracy of the TLD measurements. The phantom was defined in the DOSXYZnrc Monte Carlo code and calculations were performed with 6 MV fields, with the TLD tube placed at different positions within the cylindrical air cavity. A similar phantom was defined in the pencil beam based TPS. Differences between the Monte Carlo and the TPS calculations of the absorbed dose to the TLD tube were found to be small for an open symmetrical field. For a half-beam field through the air cavity, there was a larger discrepancy. Furthermore, dose profiles through the cylindrical air cavity show, as expected, that the treatment planning system overestimates the absorbed dose in the air cavity. This study shows that when using an open symmetrical field, Monte Carlo calculations of absorbed doses to a TLD tube in a cylindrical air cavity give results comparable to a pencil beam based treatment planning system.
Calibration and Monte Carlo modelling of neutron long counters
NASA Astrophysics Data System (ADS)
Tagziria, Hamid; Thomas, David J.
2000-10-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivity of the Monte Carlo calculations for the efficiency of the De Pangher long counter to perturbations in density and cross-section of the polyethylene used in the construction has been investigated.
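The effective-centre determination described above is conventionally done by fitting an inverse-square law to count rates measured at several source distances. A minimal sketch of that fit with invented data (the constants and distances below are illustrative, not NPL's actual values):

```python
import numpy as np

# Hypothetical long-counter counts obeying an inverse-square law with an
# effective-centre offset c behind the front face (values invented).
k_true, c_true = 5.0e4, 3.2                       # source constant, offset (cm)
d = np.array([50.0, 75.0, 100.0, 150.0, 200.0])   # front-face distances (cm)
counts = k_true / (d + c_true) ** 2

# Linearize: 1/sqrt(C) = (d + c)/sqrt(k), so a straight-line fit of
# 1/sqrt(C) against d has slope 1/sqrt(k) and intercept c/sqrt(k).
slope, intercept = np.polyfit(d, 1.0 / np.sqrt(counts), 1)
print(round(intercept / slope, 2))  # recovers the effective-centre offset
```

With real data the fit residuals indicate how well the inverse-square assumption holds over the distance range used.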
Parallel Monte Carlo simulation of multilattice thin film growth
NASA Astrophysics Data System (ADS)
Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen
2001-07-01
This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. The parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands comparable to those of the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed memory parallel computer or a shared memory machine with message passing libraries. The significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase appreciably, and the speedup shows an ascending tendency as the number of processors increases; a near-linear increase in computing speed was achieved, with no theoretical limit on the number of processors that can be used. The techniques developed in this work are also suitable for implementing the Monte Carlo code on other parallel systems.
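The partitioning step of such a domain-decomposed simulation can be sketched in one dimension; the function below is a simplified illustration of slabs-with-ghost-sites, not the ADEPT implementation itself:

```python
def decompose(n_sites, n_domains, overlap):
    """Split a 1-D lattice of n_sites into n_domains contiguous slabs and
    extend each by `overlap` ghost sites (clipped at the lattice ends).
    The ghost sites are exactly the data each processor must receive from
    its neighbours after a sweep; doing that exchange asynchronously is
    what hides the communication time behind computation."""
    edges = [n_sites * i // n_domains for i in range(n_domains + 1)]
    return [(max(0, edges[i] - overlap), min(n_sites, edges[i + 1] + overlap))
            for i in range(n_domains)]

print(decompose(100, 4, 2))  # [(0, 27), (23, 52), (48, 77), (73, 100)]
```

Wider overlaps reduce how often neighbours must synchronize, at the cost of redundant computation in the shared sites.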
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for human geometry reconstruction from a series of segmented images of a Chinese visible human dataset. The phantom geometry describes the detailed structure of each organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and comprising about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors, and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe an organ. In this study, the voxels on the organ surface were taken into account in the merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to speed up the merging. Finally, the Rad-HUMAN phantom, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images, and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organ. The constructed geometry, largely retaining the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its universality and high performance were verified experimentally.
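The voxel-merging step is not specified in detail in the abstract; as a hedged one-dimensional illustration of the underlying run-length idea (a simplification of the actual 3-D surface-aware algorithm, with an invented function name):

```python
def merge_runs(row):
    """Greedy run-length merge of a 1-D row of voxel organ labels into
    (label, start, stop) segments -- the 1-D analogue of fusing adjacent
    same-organ voxels into far fewer Monte Carlo cells (cuboids in 3-D)."""
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((row[start], start, i))
            start = i
    return runs

print(merge_runs([1, 1, 2, 2, 2, 0]))  # [(1, 0, 2), (2, 2, 5), (0, 5, 6)]
```

In three dimensions the same idea extends to fusing co-labelled runs across rows and slices into boxes, which is why sorting voxels by an index speeds the search for mergeable neighbours.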
ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments
NASA Astrophysics Data System (ADS)
Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob
2014-06-01
The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
Monte Carlo and detector simulation in OOP (Object-Oriented Programming)
Atwood, W.B.; Blankenbecler, R.; Kunz, P.; Burnett, T.; Storr, K.M.
1990-10-01
Object-Oriented Programming techniques are explored with an eye toward applications in High Energy Physics codes. Two prototype examples are given: McOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package).
The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis
Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.
2013-02-15
A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.
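The simplest bound of this kind can be illustrated directly (this ignores the machine characteristics the paper's full expressions include; the function name is invented):

```python
def imbalance_penalty(particles_per_domain):
    """Idealized upper-bound slowdown of one synchronous transport step
    under domain decomposition: every processor must wait for the most
    heavily loaded domain, so the penalty is max load / mean load.
    A value of 1.0 means perfectly balanced; larger means wasted cycles."""
    mean = sum(particles_per_domain) / len(particles_per_domain)
    return max(particles_per_domain) / mean

print(imbalance_penalty([100, 100, 100, 100]))  # 1.0
print(imbalance_penalty([250, 50, 100, 0]))     # 2.5
```

A metric like this, evaluated on the evolving particle census, is the kind of quantity that could trigger particle redistribution in a production code.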
Monte Carlo Simulations on a 9-node PC Cluster
NASA Astrophysics Data System (ADS)
Gouriou, J.
Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time of the LNHB metrological applications from several weeks to a few days. The approach uses a PC cluster running the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, the last two adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem simulated.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. The method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
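A generic sketch of the splitting/roulette game underlying such methods (simple splitting only; this is not the PENELOPE implementation, and `split_or_roulette` is a hypothetical helper):

```python
import random

def split_or_roulette(weight, importance_ratio, rng=random.random):
    """Particle splitting (ratio > 1) or Russian roulette (ratio < 1) at a
    boundary between regions of different importance. Returns the list of
    surviving particle weights. The game is unbiased because the expected
    total weight is conserved in every branch."""
    if importance_ratio >= 1.0:
        n = int(importance_ratio)
        if rng() < importance_ratio - n:   # handle non-integer split ratios
            n += 1
        return [weight / importance_ratio] * n
    # Russian roulette: survive with probability equal to the ratio
    if rng() < importance_ratio:
        return [weight / importance_ratio]
    return []
```

Splitting multiplies samples where they matter (e.g. bremsstrahlung photons headed toward the field), while roulette cheaply kills off particles headed into unimportant regions.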
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
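A minimal concrete example of the Monte Carlo integration mentioned above (the integrand and sample count are arbitrary choices for illustration):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo quadrature: average f at n uniform random points
    in [a, b] and scale by the interval length. The statistical error
    falls off as 1/sqrt(n), independent of dimension."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 over [0, 1] is exactly 1/3.
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

The same estimator generalizes directly to the sampling, annealing, and ensemble applications the abstract lists.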
Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods
Carter, L L; Lan, J S; Schwarz, R A
1991-01-01
This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice within lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a 'brute force' description of the hexagonal assemblies are also given.
Valentine, T.E.; Mihalczo, J.T.
1996-08-01
One primary concern for design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between the core power production and the external detector response using Monte Carlo calculations and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between the core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor.
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
D.P. Stotler
2005-06-09
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N alone is well suited for. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
2014-09-01
Version 01 MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. This MCNP6.1.1Beta is a follow-on to the MCNP6.1 production version which itself was the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product. This MCNP6.1.1 beta has been released in order to provide the radiation transport community with the latest feature developments and bug fixes in the code. MCNP6.1.1 has taken input from a group of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Radiation Transport Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5). They have combined their code development efforts to produce this next evolution of MCNP. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams.
MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS
Roth, Nathaniel; Kasen, Daniel
2015-03-15
We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas-radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.
Mesh-based weight window approach for Monte Carlo simulation
Liu, L.; Gardner, R.P.
1997-12-01
The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
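The basic weight-window game that such an importance map drives can be sketched as follows (a simplified version for illustration, not MCNP's exact recipe):

```python
import math
import random

def weight_window(weight, w_low, w_high, rng=random.random):
    """One round of the weight-window game: split particles above the
    window into equal-weight copies that land inside it, play Russian
    roulette on particles below it against a survival weight inside the
    window, and leave in-window particles alone. Expected total weight
    is conserved in every branch, which keeps the game unbiased."""
    if weight > w_high:
        n = math.ceil(weight / w_high)          # smallest split into the window
        return [weight / n] * n
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)      # survival weight in the window
        return [w_survive] if rng() < weight / w_survive else []
    return [weight]                             # inside the window: unchanged
```

A mesh-based window, as proposed here, attaches (w_low, w_high) to mesh cells superimposed on the geometry rather than to the geometric cells themselves, so the user need not subdivide the problem geometry to get spatial resolution in the importance map.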
Bayesian Monte Carlo method for nuclear data evaluation
NASA Astrophysics Data System (ADS)
Koning, A. J.
2015-12-01
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.
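The EXFOR-weighted sampling step can be illustrated with a conventional chi-square likelihood weighting (the exact weighting used in the paper may differ; `bmc_weights` is a hypothetical helper):

```python
import math

def bmc_weights(chi2_per_random_file):
    """Turn the chi-square of each random nuclear-data file against the
    experimental (EXFOR) data into a normalized sampling weight via the
    common likelihood choice exp(-chi2 / 2), so files that fit the
    experiments better dominate the weighted average."""
    w = [math.exp(-0.5 * c) for c in chi2_per_random_file]
    total = sum(w)
    return [x / total for x in w]

print(bmc_weights([2.0, 2.0]))  # [0.5, 0.5]
```

An EXFOR-weighted covariance matrix then follows by taking weighted moments of the sampled cross sections with these weights.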
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
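Henyey-Greenstein sampling is standard enough to sketch directly; the routine below uses the well-known analytic inverse CDF of the phase function (a generic illustration, not the LATIS code):

```python
import random

def sample_hg_cosine(g, rng=random.random):
    """Sample cos(theta) from the Henyey-Greenstein phase function via its
    analytic inverse CDF. g is the anisotropy factor (the mean cosine of
    the scattering angle); g = 0 reduces to isotropic scattering, and
    tissue is typically strongly forward-peaked (g around 0.8-0.95)."""
    xi = rng()
    if abs(g) < 1e-6:
        return 1.0 - 2.0 * xi                       # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

Mie theory, by contrast, gives a phase function with detailed angular structure for a given sphere size and refractive index, which is the source of the reflectance and contrast differences reported above.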
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
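As a concrete illustration of the variational Monte Carlo idea (a textbook sketch, not the CASINO or NECI implementations), here is a minimal Metropolis sampler for a one-parameter trial wavefunction of the 1-D harmonic oscillator; all names and parameters are illustrative:

```python
import math
import random

def vmc_energy(alpha, n_steps=100_000, step=1.0, seed=1):
    """Variational Monte Carlo for the 1-D harmonic oscillator
    (hbar = m = omega = 1) with trial function psi(x) = exp(-alpha*x^2).
    Metropolis-samples |psi|^2 and averages the local energy
    E_L(x) = alpha + x^2 * (0.5 - 2*alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_trial = x + step * (2.0 * rng.random() - 1.0)
        # Metropolis acceptance with ratio |psi(x_trial)/psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_trial**2 - x**2)):
            x = x_trial
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

# At alpha = 0.5 the trial function is exact: the local energy is
# constant, so the estimate is the ground-state energy with zero variance.
print(vmc_energy(0.5))        # -> 0.5
print(vmc_energy(0.4) > 0.5)  # any other alpha lies above the minimum
```

The zero-variance property at the exact wavefunction is the same effect that makes good trial functions so valuable in the fixed-node DMC calculations described in the abstract.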
NASA Astrophysics Data System (ADS)
Aygun, Bünyamin; Korkut, Turgay; Karabulut, Abdulhalik
2016-05-01
Despite the possibility of fossil-fuel depletion and increasing energy needs, the use of radiation continues to grow, and the security-focused debate about planned nuclear power plants continues. The objective of this thesis is to prevent the spread of radiation from nuclear reactors into the environment. To this end, we produced new, higher-performance shielding materials that retain radiation effectively under reactor operation. The additives used in the new shielding materials include iron (Fe), rhenium (Re), nickel (Ni), chromium (Cr), boron (B), copper (Cu), tungsten (W), tantalum (Ta), and boron carbide (B4C). The experimental results indicated that these materials are good shields against both gamma rays and neutrons. The powder metallurgy technique was used to produce the new shielding materials. The CERN FLUKA and Geant4 Monte Carlo simulation codes and the WinXCom program were used to determine the component percentages of the high-temperature-resistant, fast-neutron and gamma shielding materials. Superalloys were produced, and experimental measurements of the fast-neutron dose equivalent and gamma-ray absorption of the new shielding materials were then carried out. The resulting products can be used safely not only in reactors but also in nuclear medicine treatment rooms, for the storage of nuclear waste, in nuclear research laboratories, and against cosmic radiation in space vehicles.
MCNP™ Monte Carlo: A précis of MCNP
Adams, K.J.
1996-06-01
MCNP™ is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.
MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT
J. S. HENDRICK; G. W. MCKINNEY; L. J. COX
2000-01-01
The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNP, a general-purpose, three-dimensional, time-dependent, continuous-energy, fully coupled N-Particle Monte Carlo transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable graphics processing units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield tracking rates on the order of 10^7 x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Monte Carlo Methodology Serves Up a Software Success
NASA Technical Reports Server (NTRS)
2003-01-01
Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.
Monte Carlo modelling of positron transport in real world applications
NASA Astrophysics Data System (ADS)
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
Program summary
Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorised or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on physical system to be simulated
Classification: 7.6; 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb
Solution method: Quantum Monte Carlo
Running time: 225 min with 20 particles (4800 walkers moved in 1750 time steps) on one AMD Opteron™ 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
Monte Carlo calculation of patient organ doses from computed tomography.
Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi
2014-01-01
In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from chamber measurements of the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation at 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a ⁶⁰Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL agreed within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured values within 5% over a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For the bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone were up to 2-3 times those for soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.
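The dose-volume-histogram evaluation mentioned above reduces to a cumulative count over the voxel doses inside an organ contour. A toy sketch with made-up numbers (not the paper's data):

```python
def cumulative_dvh(voxel_doses, dose_bins):
    """Cumulative DVH: fraction of organ volume receiving at least
    each bin dose (equal-volume voxels assumed)."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= b) / n for b in dose_bins]

def mean_dose(voxel_doses):
    return sum(voxel_doses) / len(voxel_doses)

# Hypothetical per-voxel doses (mGy) for one organ contour:
doses = [20.1, 22.4, 23.8, 24.0, 25.5, 26.9, 23.2, 22.0]
bins = [0, 10, 20, 25, 30]
print(cumulative_dvh(doses, bins))   # -> [1.0, 1.0, 1.0, 0.25, 0.0]
print(round(mean_dose(doses), 1))    # -> 23.5
```

In a real evaluation the voxel doses come from the Monte Carlo dose grid masked by the organ contour, and the bins span the full dose range.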
CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc
Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.
2004-12-01
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
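The gain from correlated sampling comes from reusing the same particle histories in both geometries, so that the fluctuations in the ratio estimator largely cancel. A minimal illustration with two analytically known integrals standing in for the two dose calculations (functions and numbers are illustrative):

```python
import math
import random

def estimate_ratio(n=50_000, seed=42):
    """Estimate I2/I1 with I_k = E[f_k(X)], X ~ U(0,1), where f2 is a
    small 'geometry perturbation' of f1. The correlated estimator reuses
    the same random numbers for numerator and denominator."""
    rng = random.Random(seed)
    f1 = lambda x: math.exp(-x)
    f2 = lambda x: math.exp(-1.02 * x)

    xs = [rng.random() for _ in range(n)]
    correlated = sum(map(f2, xs)) / sum(map(f1, xs))   # same histories

    ys = [rng.random() for _ in range(n)]              # fresh histories
    uncorrelated = sum(map(f2, ys)) / sum(map(f1, xs))
    return correlated, uncorrelated

exact = ((1.0 - math.exp(-1.02)) / 1.02) / (1.0 - math.exp(-1.0))
corr, uncorr = estimate_ratio()
print(round(abs(corr - exact), 6))
print(round(abs(uncorr - exact), 6))
```

Because f2 differs from a scaled f1 only slightly at every point, the correlated estimator's error is typically orders of magnitude below the uncorrelated one at equal sample size, which is the effect behind the factor-of-64 efficiency gains quoted above.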
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
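The Hubbard-Stratonovich transformation that underlies the linearization mentioned above can be written, for a one-body operator \hat{O} entering the Hamiltonian quadratically, as (a standard form; sign and normalization conventions vary):

```latex
e^{\frac{\Delta\tau}{2}\lambda\hat{O}^{2}}
  \;=\;
  \frac{1}{\sqrt{2\pi\,\Delta\tau\,|\lambda|}}
  \int_{-\infty}^{\infty} d\sigma\;
  e^{-\sigma^{2}/(2\,\Delta\tau\,|\lambda|)}\;
  e^{\,s\,\sigma\hat{O}},
  \qquad
  s=\begin{cases}1, & \lambda>0\\ i, & \lambda<0\end{cases}
```

For \lambda < 0 the coupling to the auxiliary field \sigma is imaginary, which is one origin of the Monte Carlo sign problem discussed in the abstract.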
Monte Carlo simulation of large electron fields.
Faddegon, Bruce A; Perl, Joseph; Asai, Makoto
2008-03-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
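On the sphere the geodesics are great circles, so a geodesic proposal can be written down explicitly. The sketch below performs the geodesic flight only (which by itself targets the uniform distribution on S²); the full method of the paper alternates such flights with Hamiltonian velocity updates to target non-uniform densities:

```python
import math
import random

def geodesic_step_sphere(x, step, rng):
    """One geodesic move on the unit 2-sphere: draw a tangent velocity,
    then follow the great circle x(t) = x cos(|v|t) + (v/|v|) sin(|v|t)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    dot = sum(vi * xi for vi, xi in zip(v, x))
    v = [vi - dot * xi for vi, xi in zip(v, x)]   # project onto tangent space
    speed = math.sqrt(sum(vi * vi for vi in v))
    if speed == 0.0:
        return x
    c, s = math.cos(speed * step), math.sin(speed * step)
    return [c * xi + s * vi / speed for xi, vi in zip(x, v)]

rng = random.Random(3)
x = [0.0, 0.0, 1.0]
for _ in range(1000):
    x = geodesic_step_sphere(x, 0.5, rng)
norm = math.sqrt(sum(xi * xi for xi in x))
print(round(norm, 6))  # -> 1.0 (the walk never leaves the manifold)
```

This is the key advantage over embedding-space proposals, which step off the manifold and require reprojection.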
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
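A common building block in such simulations is a kinetic Monte Carlo hop with Miller-Abrahams rates on an energetically disordered lattice. The sketch below is a deliberately minimal 1-D version (the review's simulations are 3-D and include Coulomb interactions); all parameter values are illustrative:

```python
import math
import random

def miller_abrahams_rate(nu0, e_from, e_to, kT):
    """Miller-Abrahams hopping rate between neighbouring sites
    (the distance prefactor is absorbed into nu0)."""
    dE = e_to - e_from
    return nu0 * math.exp(-dE / kT) if dE > 0.0 else nu0

def kmc_walk(n_hops=5_000, n_sites=200, sigma=0.075, kT=0.025, seed=7):
    """Kinetic Monte Carlo walk of a single carrier on a 1-D ring
    with Gaussian energetic disorder (energies in eV, illustrative).
    Returns (final site, elapsed time in units of 1/nu0)."""
    rng = random.Random(seed)
    energies = [rng.gauss(0.0, sigma) for _ in range(n_sites)]
    site, t = 0, 0.0
    for _ in range(n_hops):
        nbrs = [(site - 1) % n_sites, (site + 1) % n_sites]
        rates = [miller_abrahams_rate(1.0, energies[site], energies[j], kT)
                 for j in nbrs]
        total = rates[0] + rates[1]
        t += -math.log(1.0 - rng.random()) / total      # exponential wait
        site = nbrs[0] if rng.random() * total < rates[0] else nbrs[1]
    return site, t

site, t = kmc_walk()
print(0 <= site < 200, t > 0.0)  # -> True True
```

Each step is a Gillespie move: an exponential waiting time drawn from the total escape rate, then a destination chosen with probability proportional to its rate.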
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
NASA Astrophysics Data System (ADS)
Ballarini, F.; Fluka-Phantoms Team
Astronauts' exposure to space radiation is of major concern for long-term missions, especially those in deep space such as a possible mission to Mars. Shielding optimization is therefore a crucial issue, and simulations based on radiation transport codes coupled with anthropomorphic model phantoms can be of great help. In this work, carried out with the FLUKA MC code and two anthropomorphic phantoms (a mathematical model and a "voxel" model), distributions of physical (i.e. absorbed), equivalent and "biological" dose in the various tissues and organs were calculated in different shielding conditions for solar minimum and solar maximum GCR spectra, as well as for the August 1972 Solar Particle Event. The biological dose was modeled as the average number of "Complex Lesions" (CL) per cell in a given organ. CLs are clustered DNA breaks previously calculated with "event-by-event" track structure simulations and integrated in the condensed-history FLUKA code. This approach is peculiar in that it is an example of a mechanistically based quantification of the action of ionizing radiation in biological targets; indeed, CLs have been shown to play a fundamental role in chromosome aberration induction. The contributions of primary particles and secondary hadrons were calculated separately, thus allowing quantification of the role of nuclear reactions in the shield and in the human body. As expected, the doses calculated for the 1972 SPE decrease dramatically with increasing Al shielding; nuclear reactions were found to be of minor importance, although their role is larger for internal organs and thick shielding. An Al shield thickness of 10 g/cm² appears sufficient to respect the 30-day deterministic limits recommended by NCRP for missions in Low Earth Orbit. In contrast with the results obtained for the SPE, GCR doses to internal organs are not significantly lower than skin doses. However, the relative contribution of secondary hadrons was found to be more important for
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
A Monte Carlo-based procedure for independent monitor unit calculation in IMRT treatment plans.
Pisaturo, O; Moeckli, R; Mirimanoff, R-O; Bochud, F O
2009-07-01
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires having access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step and shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. Several treatment plan localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA and further development could allow this technique to be used for other radiotherapy techniques like tomotherapy or volumetric modulated arc therapy.
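The non-negative least-squares step described above can be reproduced with a standard solver. A toy sketch with a made-up beamlet dose matrix (SciPy's `nnls` stands in for the paper's optimization; all values are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical dose-per-MU matrix: rows = calculation points,
# columns = beamlets (values are illustrative only).
A = np.array([
    [1.0, 0.2, 0.0],
    [0.3, 1.1, 0.2],
    [0.0, 0.3, 0.9],
    [0.1, 0.1, 0.1],
])
mu_true = np.array([50.0, 80.0, 60.0])   # 'TPS' monitor units
d = A @ mu_true                          # dose the plan should deliver

# Non-negative least squares recovers the MUs from the dose alone.
mu_fit, residual = nnls(A, d)
print(np.round(mu_fit, 6))  # -> [50. 80. 60.]
print(residual < 1e-9)      # -> True
```

The non-negativity constraint matters physically: monitor units cannot be negative, so an unconstrained least-squares fit could return unusable solutions.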
Geometrically-compatible 3-D Monte Carlo and discrete-ordinates methods
Morel, J.E.; Wareing, T.A.; McGhee, J.M.; Evans, T.M.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The purpose of this project was two-fold. The first purpose was to develop a deterministic discrete-ordinates neutral-particle transport scheme for unstructured tetrahedral spatial meshes, and implement it in a computer code. The second purpose was to modify the MCNP Monte Carlo radiation transport code to use adjoint solutions from the tetrahedral-mesh discrete-ordinates code to reduce the statistical variance of Monte Carlo solutions via a weight-window approach. The first task has resulted in a deterministic transport code that is much more efficient for modeling complex 3-D geometries than any previously existing deterministic code. The second task has resulted in a powerful new capability for dramatically reducing the cost of difficult 3-D Monte Carlo calculations.
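The weight-window game used to exploit the adjoint (importance) solution splits particles whose weight is above the window and plays Russian roulette on those below it, preserving the expected weight. A minimal sketch (window bounds are illustrative):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Weight-window check: split heavy particles, roulette light ones.
    Returns the list of daughter weights (possibly empty)."""
    w_surv = 0.5 * (w_low + w_high)
    if weight > w_high:                      # split into n daughters
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                       # Russian roulette
        return [w_surv] if rng() < weight / w_surv else []
    return [weight]

# Splitting conserves weight exactly:
daughters = apply_weight_window(5.0, 0.5, 2.0)
print(len(daughters), round(sum(daughters), 12))  # -> 3 5.0

# Roulette conserves weight in expectation (unbiased estimates):
random.seed(1)
n = 100_000
avg = sum(sum(apply_weight_window(0.1, 0.5, 2.0)) for _ in range(n)) / n
print(abs(avg - 0.1) < 0.01)  # -> True
```

In the project described above, the window bounds at each location are set from the deterministic adjoint solution, so computational effort concentrates on histories likely to contribute to the tally.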
NASA Astrophysics Data System (ADS)
Oleynik, D. S.
2015-12-01
A new version of the tally module of the MCU software package has been developed in which uncertainty in the initial data is taken into account directly, following the approach recommended by the international standard on the evaluation of measurement uncertainty (ISO 13005). The new module makes it possible to evaluate the effect of uncertainty in the initial data (caused by technological tolerances in the fabrication of structural members of the core) on the neutronic characteristics of the reactor. The software is adapted to parallel computing on multiprocessor computers, which significantly reduces the computation time: the parallelization coefficient is almost equal to 1. Testing was performed on the criticality problem for the Godiva benchmark experiment and for infinite lattices of VVER-440, VVER-1000, and VVER-1200 fuel assemblies. The calculated uncertainties in neutronic characteristics (effective multiplication factor, fission reaction rate) caused by uncertainties in the initial data due to technological tolerances are compared, in the first case, with published results obtained using the precision MCNP5 code and, in the second case, with results obtained with the RADAR engineering program. Good agreement is achieved in all cases.
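The tolerance-propagation idea can be sketched generically in the Monte Carlo style recommended by the uncertainty standard. The response function below is a hypothetical linear surrogate for an effective multiplication factor, not the MCU model; coefficients and tolerances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate for k_eff as a function of two toleranced
# inputs (pellet radius, enrichment). Coefficients are illustrative only.
def k_eff(radius_mm, enrich_pct):
    return 1.0 + 0.05 * (radius_mm - 4.0) + 0.02 * (enrich_pct - 4.5)

# Technological tolerances modeled as input distributions.
n = 100_000
radius = rng.normal(4.0, 0.01, n)      # mm, normal tolerance
enrich = rng.uniform(4.45, 4.55, n)    # %, rectangular tolerance band

# Monte Carlo propagation: sample inputs, evaluate, report the spread.
k = k_eff(radius, enrich)
print(f"mean k = {k.mean():.5f}, u(k) = {k.std(ddof=1):.5f}")
```

For this linear surrogate the sampled standard uncertainty matches the analytic combination of the two input uncertainties; in a real code each sample would be a full neutronics calculation.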
Monte Carlo simulations of lattice gauge theories
Rebbi, C
1980-02-01
Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n (0 ≤ n < N); the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers Z_2 are factored out. All of these groups can be considered subgroups of SU(2), and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
Advances in Monte Carlo computer simulation
NASA Astrophysics Data System (ADS)
Swendsen, Robert H.
2011-03-01
Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamics systems, with an emphasis on optimization to obtain a maximum of useful information.
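The Metropolis method named in this abstract can be sketched on the textbook 2D Ising model; lattice size, temperature, and sweep count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta = 16, 0.3          # lattice size, inverse temperature (J = 1)
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, beta):
    # One Metropolis sweep: L*L single-spin flip proposals.
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        # Energy change for flipping spin (i, j), periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        # Accept downhill moves always, uphill with probability e^(-beta*dE).
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(200):
    sweep(spins, beta)

m = abs(spins.mean())
print(f"|magnetization| per spin at beta={beta}: {m:.3f}")
```

At this temperature (above the critical point) the lattice equilibrates to a disordered state with small net magnetization.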
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
A comparison of Monte Carlo generators
Golan, Tomasz
2015-05-15
A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the two-dimensional π⁺ energy vs. cosine distribution.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
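The binomial-subdivision construction this abstract describes is easy to sketch: split the counting interval into many subintervals, each registering at most one count, and the counts per interval approach a Poisson distribution. Parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Expected counts per interval, number of subintervals, repeated trials.
mean_counts, n_sub, n_trials = 4.0, 10_000, 50_000

# Each subinterval registers at most one count, with probability p,
# so counts per interval are binomial(n_sub, p).
p = mean_counts / n_sub
counts = rng.binomial(n_sub, p, size=n_trials)

# As n_sub grows, the binomial converges to Poisson(mean_counts),
# for which the variance equals the mean.
print(f"sample mean = {counts.mean():.3f} (expected {mean_counts})")
print(f"sample var  = {counts.var():.3f} (Poisson variance = mean)")
```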
Advanced computational methods for nodal diffusion, Monte Carlo, and S_N problems
NASA Astrophysics Data System (ADS)
Martin, W. R.
1993-01-01
This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN Problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.
2012-07-01
The availability of high-performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependence of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications are shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic modular high-temperature reactor using THERMIX for thermal-hydraulics. (authors)
Quantum Monte Carlo Endstation for Petascale Computing
David Ceperley
2011-03-02
CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular
Application of Direct Simulation Monte Carlo to Satellite Contamination Studies
NASA Technical Reports Server (NTRS)
Rault, Didier F. G.; Woronwicz, Michael S.
1995-01-01
A novel method is presented to estimate contaminant levels around spacecraft and satellites of arbitrarily complex geometry. The method uses a three-dimensional direct simulation Monte Carlo algorithm to characterize the contaminant cloud surrounding the space platform, and a computer-assisted design preprocessor to define the space-platform geometry. The method is applied to the Upper Atmosphere Research Satellite to estimate the contaminant flux incident on the optics of the halogen occultation experiment (HALOE) telescope. Results are presented in terms of contaminant cloud structure, molecular velocity distribution at HALOE aperture, and code performance.
Experimental validation of plutonium ageing by Monte Carlo correlated sampling
Litaize, O.; Bernard, D.; Santamarina, A.
2006-07-01
Integral measurements of plutonium ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRAL3) of the French MISTRAL programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the ²⁴¹Am capture cross section. The next experiment (MISTRAL3) did not lead to the same conclusion. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)
Neutronic calculations for CANDU thorium systems using Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Saldideh, M.; Shayesteh, M.; Eshghi, M.
2014-08-01
In this paper, we have investigated the prospects of exploiting the rich world thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in the criticality condition. Four different fuel compositions have been selected for analysis. We have obtained the infinite multiplication factor, k∞, under full-power operation of the reactor over 8 years. The neutron flux distribution in the full reactor core was also investigated.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission
NASA Technical Reports Server (NTRS)
Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
Electron transport in radiotherapy using local-to-global Monte Carlo
Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.; Neuenschwander, H.; Mackie, T.R.; Reckwerdt, P.J.
1994-09-01
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
Ahmad, I.; Back, B.B.; Betts, R.R.
1995-08-01
An essential component in the assessment of the significance of the results from APEX is a demonstrated understanding of the acceptance and response of the apparatus. This requires detailed simulations which can be compared to the results of various source and in-beam measurements. These simulations were carried out using the computer codes EGS and GEANT, both specifically designed for this purpose. As far as is possible, all details of the geometry of APEX were included. We compared the results of these simulations with measurements using electron conversion sources, positron sources and pair sources. The overall agreement is quite acceptable and some of the details are still being worked on. The simulation codes were also used to compare the results of measurements of in-beam positron and conversion electrons with expectations based on known physics or other methods. Again, satisfactory agreement is achieved. We are currently working on the simulation of various pair-producing scenarios such as the decay of a neutral object in the mass range 1.5-2.0 MeV and also the emission of internal pairs from nuclear transitions in the colliding ions. These results are essential input to the final results from APEX on cross section limits for various, previously proposed, sharp-line producing scenarios.
Monte Carlo track structure for radiation biology and space applications
NASA Technical Reports Server (NTRS)
Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.
2001-01-01
Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanism of damage by ionising radiation in the cell. These codes have continuously been modified to include new improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation, and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and the radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the wall counter responses even at such a low ion energy.
Interaction picture density matrix quantum Monte Carlo.
Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.
1980-05-01
Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers; it can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.
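One of the classic variance-reduction techniques covered in such tutorials, Russian roulette, can be sketched as a weight game; the cutoff and survival weights below are illustrative choices, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(4)

def roulette(weights, w_cutoff=0.1, w_survive=0.5):
    # Russian roulette: a particle below the weight cutoff either dies
    # or survives with boosted weight w_survive, with survival
    # probability w / w_survive so the expected weight is unchanged.
    out = []
    for w in weights:
        if w >= w_cutoff:
            out.append(w)                  # heavy particles always survive
        elif rng.random() < w / w_survive:
            out.append(w_survive)          # survivor carries boosted weight
        # else: particle killed, dropped from the bank
    return np.array(out)

weights = rng.uniform(0.0, 0.2, size=200_000)
after = roulette(weights)

# The game is unbiased: total weight is preserved in expectation,
# while the number of low-weight histories to track is reduced.
print(f"particles: {len(weights)} -> {len(after)}")
print(f"total weight before: {weights.sum():.0f}, after: {after.sum():.0f}")
```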
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed (validated by Kovats retention indices), the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
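The cross-prediction idea behind Monte Carlo outlier detection can be sketched with a simple linear model: repeated random train/test splits accumulate an out-of-sample error distribution for each sample, and samples with unusually large errors are flagged. This is a simplified sketch with synthetic data, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic calibration data with one planted outlier in y.
n, p = 60, 3
X = rng.normal(size=(n, p))
beta = np.array([1.5, -2.0, 0.5])
y = X @ beta + rng.normal(0.0, 0.1, n)
y[7] += 5.0                      # sample 7 is the planted outlier

# Monte Carlo cross-prediction: many random train/test splits,
# collecting each sample's out-of-sample prediction errors.
errs = [[] for _ in range(n)]
for _ in range(500):
    idx = rng.permutation(n)
    train, test = idx[:40], idx[40:]
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    for i in test:
        errs[i].append(y[i] - X[i] @ coef)

# Flag the sample with the largest mean absolute prediction error.
mean_abs_err = np.array([np.mean(np.abs(e)) for e in errs])
print("flagged outlier:", int(np.argmax(mean_abs_err)))
```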
Monte Carlo simulations of fluid vesicles.
Sreeja, K K; Ipsen, John H; Sunil Kumar, P B
2015-07-15
Lipid vesicles are closed two-dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review recent developments in Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large-scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature, leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induces anisotropic directional curvatures. Methods to explore the steady-state morphological structures due to active flux of materials are also described in the context of Monte Carlo simulations. PMID:26087479
Monte Carlo modeling of exospheric bodies - Mercury
NASA Technical Reports Server (NTRS)
Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.
1978-01-01
In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
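Sampling from the Maxwell-Boltzmann flux distribution discussed in this abstract has a standard trick: the flux-weighted speed density p(v) ∝ v³ exp(−v²/2σ²) reduces, under u = v²/2σ², to a Gamma(2) distribution. The sketch below uses σ = 1 for illustration; it is not the authors' exosphere code.

```python
import numpy as np
from math import gamma, sqrt

rng = np.random.default_rng(6)

# Flux-weighted Maxwell-Boltzmann speed distribution:
#   p(v) ∝ v^3 exp(-v^2 / (2 sigma^2)),  sigma^2 = kT/m.
# With u = v^2 / (2 sigma^2), p(u) ∝ u e^{-u}, i.e. u ~ Gamma(shape=2),
# so v = sigma * sqrt(2 u).
sigma = 1.0
u = rng.gamma(shape=2.0, scale=1.0, size=500_000)
v = sigma * np.sqrt(2.0 * u)

# Analytic mean speed of the flux distribution, for comparison:
# <v> = sigma * sqrt(2) * Gamma(2.5) / Gamma(2).
v_mean_exact = sigma * sqrt(2.0) * gamma(2.5) / gamma(2.0)
print(f"sampled <v> = {v.mean():.4f}, exact <v> = {v_mean_exact:.4f}")
```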
Monte Carlo Particle Transport: Algorithm and Performance Overview
Gentile, N; Procassini, R; Scott, H
2005-06-02
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
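The "relative ease of programming" noted in this abstract is easy to demonstrate: a minimal analog transport sketch for a purely absorbing 1-D slab fits in a few lines. Cross section and thickness below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Analog Monte Carlo transmission through a purely absorbing 1-D slab.
sigma_t, thickness, n = 1.0, 3.0, 1_000_000   # 1/cm, cm, histories

# Free path to the first collision follows the exponential law with
# mean 1/sigma_t; a particle is transmitted if its path exceeds the slab.
free_path = rng.exponential(1.0 / sigma_t, size=n)
mc_transmission = np.count_nonzero(free_path > thickness) / n

exact = np.exp(-sigma_t * thickness)   # analytic attenuation e^(-sigma*x)
print(f"MC = {mc_transmission:.5f}, exact = {exact:.5f}")
```

The statistical noise the abstract mentions is visible here too: the analog estimate converges only as 1/√n.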
Quantum Monte Carlo with known sign structures
NASA Astrophysics Data System (ADS)
Nilsson, Johan
We investigate the merits of different Hubbard-Stratonovich transformations (including fermionic ones) for the description of interacting fermion systems, focusing on the single-band Hubbard model as a model system. In particular, we revisit an old proposal of Batrouni and de Forcrand (PRB 48, 589 (1993)) for determinant quantum Monte Carlo simulations, in which the signs of all configurations are known beforehand. We will discuss different ways that this knowledge can be used to make more accurate predictions and simulations.
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.
Monte Carlo Generators for the LHC
NASA Astrophysics Data System (ADS)
Worek, M.
2007-11-01
The status of two Monte Carlo generators, HELAC-PHEGAS, a program for multi-jet processes and VBFNLO, a parton level program for vector boson fusion processes at NLO QCD, is briefly presented. The aim of these tools is the simulation of events within the Standard Model at current and future high energy experiments, in particular the LHC. Some results related to the production of multi-jet final states at the LHC are also shown.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
The Specific Bias in Dynamic Monte Carlo Simulations of Nuclear Reactor
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihisa; Endo, Hiroshi; Ishizu, Tomoko; Tatewaki, Isao
2014-06-01
During the development of a Monte-Carlo-based dynamic code system, we have encountered two major Monte-Carlo-specific problems. One is a breakdown due to "false super-criticality," caused by an accidentally large eigenvalue arising from statistical error even though the reactor is not actually supercritical. The other problem, which is the main topic of this paper, is that the statistical error in the power level computed from Monte Carlo reactivity estimates is not symmetric about its mean but is always positively biased. This means that the bias accumulates as the calculation proceeds and consequently leads to over-estimation of the final power level. It should be noted that the bias is not eliminated by refining the time step as long as the variance is nonzero. A preliminary investigation of this matter using the one-group-precursor point kinetics equations concluded that the bias in the power level is approximately proportional to the product of the variance of the Monte Carlo calculation and the elapsed time. This conclusion was verified with numerical experiments. This outcome is important in quantifying the required precision of Monte-Carlo-based reactivity calculations.
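The positive bias described above is Jensen's inequality at work: the power level depends exponentially on the reactivity estimate, so symmetric noise in reactivity produces asymmetric, upward-biased noise in power. A toy sketch of this mechanism (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true = 0.0       # true reactivity: exactly critical
sigma = 0.001        # std. dev. of the Monte Carlo reactivity estimate
Lambda = 1e-4        # neutron generation time (s), illustrative
dt = 0.01            # time step (s)
n_steps, n_trials = 200, 20000

# propagate power with a freshly sampled noisy reactivity at each step:
# P_{k+1} = P_k * exp(rho_hat * dt / Lambda)
log_P = np.zeros(n_trials)
for _ in range(n_steps):
    rho_hat = rng.normal(rho_true, sigma, n_trials)
    log_P += rho_hat * dt / Lambda
P = np.exp(log_P)

print("exact final power:", np.exp(rho_true * n_steps * dt / Lambda))  # 1.0
print("mean simulated power:", P.mean())  # > 1: positively biased
# for Gaussian noise E[exp(X)] = exp(mu + var/2), so the relative bias is
# roughly exp(n_steps * (sigma * dt / Lambda)**2 / 2) - 1, i.e. it grows
# with both the variance of the reactivity estimate and the elapsed time
```

Each trial is unbiased in log-power, yet the mean power exceeds the exact value, consistent with the bias-proportional-to-variance-times-time conclusion above.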
Monte Carlo small-sample perturbation calculations
Feldman, U.; Gelbard, E.; Blomquist, R.
1983-01-01
Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, not to analyze experiments directly. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically.
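The roulette and splitting games mentioned above are the standard weight-preserving variance reduction moves. A minimal sketch of both, with a numerical check that roulette conserves the expected total weight (the survival probability and sample counts are illustrative):

```python
import random

def russian_roulette(weight, survival_prob, rng):
    """Kill the particle with probability 1 - survival_prob; a survivor's
    weight is divided by survival_prob so the expected weight is unchanged."""
    if rng.random() < survival_prob:
        return weight / survival_prob
    return None   # particle terminated

def split(weight, n):
    """Replace one particle by n copies, each carrying 1/n of the weight."""
    return [weight / n] * n

# fairness check: roulette preserves the expected total weight
rng = random.Random(1)
N, total = 100000, 0.0
for _ in range(N):
    w = russian_roulette(1.0, 0.5, rng)
    if w is not None:
        total += w
print(total / N)   # ~ 1.0
```

Splitting trivially conserves weight exactly; roulette conserves it only in expectation, which is why exempting neutrons that have already passed through the sample (as described above) changes variance but not the mean.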
jTracker and Monte Carlo Comparison
NASA Astrophysics Data System (ADS)
Selensky, Lauren; SeaQuest/E906 Collaboration
2015-10-01
SeaQuest is designed to observe the characteristics and behavior of `sea quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the Main Injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, these data become meaningful only after they have been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and the inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not known a priori unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during the simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
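The cost of the sign problem can be seen in a toy signed estimator: when the average sign <s> decays (exponentially in particle number and inverse temperature), the relative error of any sign-weighted average blows up like 1/<s>. A sketch of this scaling (the sampling model is illustrative, not the PIMC algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def signed_estimate(avg_sign, n_samples):
    """Toy signed estimator: each sample carries a sign +/-1 whose mean is
    avg_sign; physical observables are ratios <s*O>/<s>, so the statistical
    quality is governed by how well <s> itself is resolved."""
    p_plus = (1.0 + avg_sign) / 2.0
    s = np.where(rng.random(n_samples) < p_plus, 1.0, -1.0)
    return s.mean()

rel_err = {}
for avg_sign in (0.5, 0.05, 0.005):  # <s> shrinks exponentially with system size
    est = np.array([signed_estimate(avg_sign, 10000) for _ in range(200)])
    rel_err[avg_sign] = est.std() / avg_sign
    print(f"<s> = {avg_sign}: relative error ~ {rel_err[avg_sign]:.2f}")
# the relative error scales like 1/(<s> * sqrt(n_samples)): reaching a fixed
# target precision costs exponentially more samples as <s> -> 0
```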
A practical Monte Carlo MU verification tool for IMRT quality assurance
NASA Astrophysics Data System (ADS)
Fan, J.; Li, J.; Chen, L.; Stathakis, S.; Luo, W.; Du Plessis, F.; Xiong, W.; Yang, J.; Ma, C.-M.
2006-05-01
Quality assurance (QA) for intensity-modulated radiation therapy (IMRT) treatment planning and beam delivery, using ionization chamber measurements and film dosimetry in a phantom, is time consuming. The Monte Carlo method is the most accurate method for radiotherapy dose calculation. However, a major drawback of Monte Carlo dose calculation as currently implemented is its slow speed. The goal of this work is to bring the efficiency of Monte Carlo into a practical range by developing a fast Monte Carlo monitor unit (MU) verification tool for IMRT. A special estimator for dose at a point called the point detector has been used in this research. The point detector uses the next event estimation (NEE) method to calculate the photon energy fluence at a point of interest and then converts it to collision kerma by the mass energy absorption coefficient assuming the presence of transient charged particle equilibrium. The MU verification tool has been validated by comparing the calculation results with measurements. It can be used for both patient dose verification and phantom QA calculation. The dynamic leaf-sequence log file is used to rebuild the actual MLC leaf sequence in order to predict the dose actually received by the patient. Dose calculations for 20 patient plans have been performed using the point detector method. Results were compared with direct Monte Carlo simulations using EGS4/MCSIM, which is a well-benchmarked Monte Carlo code. The results between the point detector and MCSIM agreed to within 2%. A factor of 20 speedup can be achieved with the point detector method compared with direct Monte Carlo simulations.
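The kerma conversion underlying the point detector is a one-line formula: under transient charged particle equilibrium, collision kerma is the photon energy fluence times the mass energy-absorption coefficient. A minimal sketch (the numerical coefficient is an illustrative textbook value, not a number from the paper):

```python
def collision_kerma(energy_fluence, mu_en_over_rho):
    """Collision kerma under transient charged particle equilibrium:
    K_col = Psi * (mu_en / rho). Units: energy_fluence in MeV/cm^2,
    mu_en/rho in cm^2/g, result in MeV/g."""
    return energy_fluence * mu_en_over_rho

# illustrative values (not from the paper): photons of about 1 MeV in water
# have mu_en/rho of roughly 0.031 cm^2/g
print(collision_kerma(1.0e9, 0.031))   # MeV/g
```

In the NEE scheme above, the energy fluence at the detector point is accumulated analytically from each source and collision event; the conversion to dose is then this single multiplication per energy bin.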
Neutron streaming through shield ducts using a discrete ordinates/Monte Carlo method
Urban, W.T.; Baker, R.S.
1993-08-18
A common problem in shield design is determining the neutron flux that streams through ducts in shields and also the flux that penetrates the shield after having traveled partway down the duct. The neutrons that stream down the duct can be computed in a straightforward manner using Monte Carlo techniques, while those that must penetrate a significant portion of the shield are more easily handled using discrete ordinates methods. A hybrid discrete ordinates/Monte Carlo code, TWODANT/MC, which is an extension of the existing discrete ordinates code TWODANT, has been developed at Los Alamos to allow the efficient, accurate treatment of both streaming and deep penetration problems in a single calculation. In this paper we provide examples of the application of TWODANT/MC to typical geometries that are encountered in shield design and compare the results with those obtained using the Los Alamos Monte Carlo code MCNP.
Monte Carlo applications at Hanford Engineering Development Laboratory
Carter, L.L.; Morford, R.J.; Wilcox, A.D.
1980-03-01
Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in the execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport, including some intuition for the problems being solved; to understand the construction of sources for the random walk; to understand the interpretation of tallies made by the code; and to have a basic understanding of elementary biasing techniques.
Sechopoulos, Ioannis; Ali, Elsayed S M; Badal, Andreu; Badano, Aldo; Boone, John M; Kyprianou, Iacovos S; Mainegra-Hing, Ernesto; McMillan, Kyle L; McNitt-Gray, Michael F; Rogers, D W O; Samei, Ehsan; Turner, Adam C
2015-10-01
The use of Monte Carlo simulations in diagnostic medical imaging research is widespread due to their flexibility and their ability to estimate quantities that are challenging to measure empirically. However, any new Monte Carlo simulation code needs to be validated before it can be used reliably. The type and degree of validation required depends on the goals of the research project, but, typically, such validation involves either comparison of simulation results to physical measurements or to previously published results obtained with established Monte Carlo codes. The former is complicated by the nuances of experimental conditions and uncertainty, while the latter is challenging because of the typically graphical presentation and the lack of simulation details in previous publications. In addition, entering the field of Monte Carlo simulations involves a steep learning curve: it is not a simple task to learn how to program and interpret a Monte Carlo simulation, even when using one of the publicly available code packages. This Task Group report provides a common reference for benchmarking Monte Carlo simulations across a range of Monte Carlo codes and simulation scenarios. In the report, all simulation conditions are provided for six different Monte Carlo simulation cases that involve common x-ray based imaging research areas. The results obtained for the six cases using four publicly available Monte Carlo software packages are included in tabular form. In addition to a full description of all simulation conditions and results, a discussion and comparison of results among the Monte Carlo packages and the lessons learned during the compilation of these results are included. This abridged version of the report includes only an introductory description of the six cases and a brief example of the results of one of the cases. This work provides investigators with the information necessary to benchmark their Monte Carlo simulation software against the reference cases included here.
Baker, R.S.; Filippone, W.F. . Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. )
1991-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly-multiplying systems. 8 refs., 8 figs., 1 tab.
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth doses and beam profiles computed using the source model agree within 2% with measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical, and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial electron spectra are then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
NASA Astrophysics Data System (ADS)
Díez, A.; Largo, J.; Solana, J. R.
2006-08-01
Computer simulations have been performed for fluids with a van der Waals potential, that is, hard spheres with attractive inverse-power tails, to determine the equation of state and the excess energy. In addition, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, it has been found that results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
Treatment planning for a small animal using Monte Carlo simulation
Chow, James C. L.; Leung, Michael K. K.
2007-12-15
The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set and a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals, with consideration of inhomogeneities, using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable for treatment planning dose calculations for small animal anatomies, whose voxel sizes are about one order of magnitude smaller than those used for humans.
Improved criticality convergence via a modified Monte Carlo iteration method
Booth, Thomas E; Gubernatis, James E
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k{sub 3}|/k{sub 1} instead of |k{sub 2}|/k{sub 1}. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must be summed somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this can sometimes be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has stability problems. We also show that a simple method deals with this in an effective, if ad hoc, manner.
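A deterministic matrix analogue makes the convergence-rate claim concrete: subtracting the second eigenmode at every step leaves only the third and higher modes to decay. This sketch deliberately ignores the Monte Carlo complications (negative weights, cancellation) discussed above, and the matrix and eigenvalues are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# symmetric test operator with well-separated eigenvalues k1 > k2 > k3 > k4
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]
A = Q @ np.diag([1.00, 0.95, 0.50, 0.30]) @ Q.T
k, V = np.linalg.eigh(A)
v1, v2 = V[:, -1], V[:, -2]          # dominant and second eigenvectors

def iterate(x, deflate, n):
    """Power iteration; optionally subtract the second mode every step."""
    errs = []
    for _ in range(n):
        x = A @ x
        if deflate:
            x = x - (v2 @ x) * v2    # remove the second eigenfunction
        x = x / np.linalg.norm(x)
        errs.append(1.0 - abs(v1 @ x))
    return errs

x0 = rng.standard_normal(4)
plain = iterate(x0, False, 30)       # error decays like (k2/k1)^n = 0.95^n
modified = iterate(x0, True, 30)     # error decays like (k3/k1)^n = 0.50^n
print(plain[-1], modified[-1])
```

Here the second eigenvector is known exactly; the Monte Carlo method must instead estimate and maintain it with signed particles, which is where the cancellation and stability difficulties enter.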
Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina
NASA Astrophysics Data System (ADS)
Chen, Xiaoyan; Lane, Stephen
2010-02-01
We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or only partially illuminates the vessel, ZPP fluorescence will be 10-200 times higher than the background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; the fluorescence from the layers below does not contribute to the signal in this case. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina looks very promising.
Monte Carlo parameter studies and uncertainty analyses with MCNP5
Brown, F. B.; Sweezy, J. E.; Hayes, R. B.
2004-01-01
A software tool called mcnp-pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. Monte Carlo codes are being used for a wide variety of applications today due to their accurate physical modeling and the speed of today's computers. In most applications for design work, experiment analysis, and benchmark calculations, it is common to run many calculations, not just one, to examine the effects of design tolerances, experimental uncertainties, or variations in modeling features. We have developed a software tool for use with MCNP5 to automate this process. The tool, mcnp-pstudy, is used to automate the operations of preparing a series of MCNP5 input files, running the calculations, and collecting the results. Using this tool, parameter studies, total uncertainty analyses, or repeated (possibly parallel) calculations with MCNP5 can be performed easily. Essentially no extra user setup time is required beyond that of preparing a single MCNP5 input file.
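The core of such a driver is input templating over a parameter grid. A hedged sketch in the spirit of mcnp-pstudy (the input template, keywords, file names, and parameter values below are hypothetical placeholders, not the actual tool's interface):

```python
# sketch of a parameter-study driver; template and values are hypothetical
import itertools
import pathlib
import tempfile

template = """c  sphere, radius {radius} cm, density {density} g/cc
1  1  -{density}  -1    imp:n=1
2  0               1    imp:n=0

1  so  {radius}
"""

radii = [5.0, 6.0, 7.0]
densities = [18.7, 18.9]

outdir = pathlib.Path(tempfile.mkdtemp())
for i, (r, d) in enumerate(itertools.product(radii, densities)):
    # one input file per point on the parameter grid
    (outdir / f"case_{i:03d}.inp").write_text(template.format(radius=r, density=d))

# a real driver would now submit each input to the cluster, then parse the
# outputs to collect k_eff and its statistical uncertainty for each case
print(sorted(p.name for p in outdir.glob("*.inp")))
```

The 3 x 2 grid yields six input files; the "no extra setup beyond a single input file" claim above corresponds to marking the varying fields in one template.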
General purpose dynamic Monte Carlo with continuous energy for transient analysis
Sjenitzer, B. L.; Hoogenboom, J. E.
2012-07-01
Transient analysis is an important tool for safety assessments: it can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite its importance, the state of the art still relies on rather crude methods, such as diffusion theory and point kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli4, and the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and that the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to perform an exact transient calculation in arbitrary geometry.
Analytic Monte Carlo score distributions for future statistical confidence interval studies
Booth, T.E. )
1992-10-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the tail of the score distribution. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. This paper provides the analytic score distribution for geometry splitting/Russian roulette applied to a simple Monte Carlo problem, and the analytic score distribution for the exponential transform applied to the same problem.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV, and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the splitting and roulette variance reduction techniques. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation, and the dose measurements were analyzed and compared. The skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the results given by the empirical formulas. The effect on the skyshine dose of different accelerator head structures is also discussed in this paper.
Booth, T.E.
1992-03-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the tail of the score distribution. This paper provides the analytic score distribution for geometry splitting/Russian roulette applied to a simple Monte Carlo problem and the analytic score distribution for the exponential transform applied to the same problem. It is shown that the large-score tails of the two distributions behave very differently. In particular, the exponential transform is shown to have an infinite variance for some parameter choices.
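The exponential-transform pathology is easy to reproduce on the textbook problem of estimating transmission through a slab: sampling the distance to collision from a transformed exponential and scoring the likelihood-ratio weight is unbiased for any stretching parameter, but the variance is finite only for sigma* < 2*sigma. A sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, d = 1.0, 3.0                  # total cross section and slab thickness
exact = np.exp(-sigma * d)           # analog transmission probability

def transmission_estimate(sigma_star, n):
    """Sample the distance to collision from the transformed density
    sigma_star*exp(-sigma_star*x) and score the likelihood-ratio weight
    whenever the sampled flight crosses the slab (x > d)."""
    x = rng.exponential(1.0 / sigma_star, n)
    w = (sigma / sigma_star) * np.exp(-(sigma - sigma_star) * x)
    score = np.where(x > d, w, 0.0)
    return score.mean(), score.std() / np.sqrt(n)

for sigma_star in (0.3, 1.0, 2.5):
    mean, err = transmission_estimate(sigma_star, 200000)
    print(f"sigma* = {sigma_star}: {mean:.5f} +/- {err:.5f}  (exact {exact:.5f})")
# unbiased for every sigma_star, but the true variance is finite only for
# sigma_star < 2*sigma; at sigma_star = 2.5 the sample error estimate is
# itself unreliable, which is the pathology the analysis above addresses
```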
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
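The tracking/tally-server split can be sketched in a few lines, with a thread standing in for the server rank and a queue for the network (the real OpenMC implementation batches scoring events over MPI; everything below is a schematic, not OpenMC's API):

```python
# schematic of the tracking/tally-server decomposition: tracking processors
# ship scores to a server instead of holding the tally array locally
import queue
import threading

tally = {}
inbox = queue.Queue()

def tally_server():
    """Continuously receive (bin, score) messages and update the tally."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        bin_id, score = msg
        tally[bin_id] = tally.get(bin_id, 0.0) + score

server = threading.Thread(target=tally_server)
server.start()

# "tracking processor": simulate particles and send scoring events away
for particle in range(1000):
    bin_id = particle % 10       # stand-in for the tally bin the track hits
    inbox.put((bin_id, 0.5))     # stand-in for the track-length score

inbox.put(None)
server.join()
print(tally[0], sum(tally.values()))  # 50.0 500.0
```

The point of the decomposition above is memory, not speed: the tracking ranks never allocate the full tally array, so tally size is limited by the servers' aggregate memory rather than by a single node.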
Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo.
Garcia Cardona, Cristina; Webb, Edmund Blackburn, III; Wagner, Gregory John; Tikare, Veena; Holm, Elizabeth Ann; Plimpton, Steven James; Thompson, Aidan Patrick; Slepoy, Alexander; Zhou, Xiao Wang; Battaile, Corbett Chandler; Chandross, Michael Evan
2009-10-01
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, that is, at length and time scales between the atomic and the continuum. We have completed a 3-year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed and describe our new open-source code, SPPARKS (Stochastic Parallel PARticle Kinetic Simulator). We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
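The rejection-free (n-fold way) event loop at the heart of kinetic Monte Carlo picks an event with probability proportional to its rate and advances time by an exponential increment. A minimal sketch (the rates are illustrative; codes like SPPARKS parallelize this loop over a spatial decomposition):

```python
import math
import random

def kmc_step(rates, rng):
    """Select one event with probability proportional to its rate, then
    advance the clock by an exponentially distributed time increment."""
    total = sum(rates)
    r = rng.uniform(0.0, total)
    cumulative, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # time to next event
    return chosen, dt

rng = random.Random(7)
rates = [1.0, 3.0, 6.0]        # illustrative event rates (arbitrary units)
counts, t = [0, 0, 0], 0.0
for _ in range(100000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
print(counts)   # event frequencies ~ 10%, 30%, 60% of steps
```

Because every step executes an event, the simulated time advances in variable increments set by the total rate, which is what lets KMC bridge the time scales between atomistic and continuum models.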
Gamma irradiation of cultural artifacts for disinfection using Monte Carlo simulations.
Choi, Jong-il; Yoon, Minchul; Kim, Dongho
2012-11-01
In this study, the disinfection of Korean cultural artifacts by gamma irradiation was investigated by simulating the absorbed dose distribution in the objects with the Monte Carlo method. Fungal contamination was identified on two traditional Korean agricultural tools, Hongdukkae and Holtae, which had been stored in a museum. Nine primary species were identified from these items: Bjerkandera adusta, Dothideomycetes sp., Penicillium sp., Cladosporium tenuissimum, Aspergillus versicolor, Penicillium sp., Entrophospora sp., Aspergillus sydowii, and Corynascus sepedonium. However, these fungi were completely inactivated by gamma irradiation at an absorbed dose of 20 kGy on the front side. The Monte Carlo N-Particle (MCNP) transport code was used to simulate the doses applied to these cultural artifacts, and the measured dose distributions were well predicted by the simulations. These results show that irradiation is effective for the disinfection of cultural artifacts and that the dose distribution can be predicted with Monte Carlo simulations, allowing optimization of the radiation treatment.
Domain decomposition methods for parallel laser-tissue models with Monte Carlo transport
Alme, H.J.; Rodrique, G.; Zimmerman, G.
1998-10-19
Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.
Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX
Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim
2006-07-01
As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous-energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with the 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies because the collapsed multigroup approach miscalculated the reaction rates. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous-energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the fission yield for that energy range. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code.
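The fission-yield selection logic described above can be sketched roughly as follows; the energy boundaries and binned fission rates are purely illustrative, and the actual CINDER90 boundaries and data structures differ:

```python
def select_fission_yield(fission_rates, energies, thermal_max=1.0, fast_max=1.0e6):
    """Pick the thermal, fast, or high-energy fission-yield set by
    integrating the fission rate over each energy range and taking the
    range that contains the majority of fissions.
    `energies` are bin centers in eV; `fission_rates` are the fission
    rates per bin (already integrated over the bin width)."""
    totals = {"thermal": 0.0, "fast": 0.0, "high": 0.0}
    for e, rate in zip(energies, fission_rates):
        if e <= thermal_max:
            totals["thermal"] += rate
        elif e <= fast_max:
            totals["fast"] += rate
        else:
            totals["high"] += rate
    return max(totals, key=totals.get)

# Mostly-thermal spectrum: the majority of fissions fall below 1 eV.
energies = [0.025, 0.5, 1.0e3, 2.0e6]
rates = [10.0, 3.0, 2.0, 1.0]
print(select_fission_yield(rates, energies))
```

The same three-way decision applies per problem, so a fast-spectrum case with most fissions between the two boundaries would return the fast yield set instead.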
NASA Astrophysics Data System (ADS)
Salimi, E.; Rahighi, J.; Sardari, D.; Mahdavi, S. R.; Lamehi Rachti, M.
2014-12-01
Gas bremsstrahlung is generated in high-energy electron storage rings through interaction of the electron beam with residual gas molecules in the vacuum chamber. In this paper, Monte Carlo calculations have been performed to evaluate the radiation hazard due to gas bremsstrahlung in the Iranian Light Source Facility (ILSF) insertion devices. Shutter/stopper dimensions are determined, and the dose rate from photoneutrons produced inside the shutter/stopper via the giant-resonance photonuclear reaction is also obtained. Other characteristics of gas bremsstrahlung, such as photon fluence, energy spectrum, angular distribution, and equivalent dose in a tissue-equivalent phantom, have also been investigated with the FLUKA Monte Carlo code.
Monte Carlo simulation of radiation streaming from a radioactive material shipping cask
Liu, Y.Y.; Schwarz, R.A.; Tang, J.S.
1996-04-01
Simulations of gamma radiation streaming from a radioactive material shipping cask have been performed with the Monte Carlo codes MCNP4A and MORSE-SGC/S. Despite inherent difficulties in simulating deep penetration of radiation and streaming, the simulations have yielded results that agree within one order of magnitude with the radiation survey data, with reasonable statistics. These simulations have also provided insight into modeling radiation detection, notably on the location and orientation of the radiation detector with respect to photon streaming paths and on techniques used to reduce variance in the Monte Carlo calculations. 13 refs., 4 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Tabary, J.; Glière, A.
A Monte Carlo radiation transport simulation program, EGS Nova, and a computer-aided design package, BRL-CAD, have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current status, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a separate Monte Carlo code and its parameter set. Numerical validations show good agreement with EGS4-computed and published data. As the program's major drawback is its execution time, computational efficiency improvements are foreseen.
Status of Monte-Carlo Event Generators
Hoeche, Stefan (SLAC)
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple bremsstrahlung emissions off the initial and final state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which makes it possible to relate sprays of hadronic particles in detectors to the partons of perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Quantum Monte Carlo for vibrating molecules
Brown, W.R.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state variational Monte Carlo (VMC) and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of nonlinear basis-function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the nonlinear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and to stabilize the statistical error estimates the Monte Carlo data were collected into blocks. Accurate vibrational state energies were computed using both serial and parallel versions of QMCVIB. Comparison of the vibrational state energies computed from the three C₃ PESs suggested that a nonlinear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
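The blocking of correlated Monte Carlo data mentioned above can be illustrated with a small reblocking sketch on synthetic data; the AR(1) series stands in for a correlated QMC time series, and the function names are ours:

```python
import math
import random

def reblock(data):
    """Return naive standard-error estimates at successive blocking levels.
    For correlated data the estimate grows with block size until it
    plateaus near the true statistical error."""
    errors = []
    x = list(data)
    while len(x) >= 2:
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / (n - 1)
        errors.append(math.sqrt(var / n))
        # average neighbouring points to form the next blocking level
        x = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(n // 2)]
    return errors

# Synthetic correlated series: an AR(1) process with known correlation.
rng = random.Random(0)
x, series = 0.0, []
for _ in range(4096):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    series.append(x)
errors = reblock(series)
print(errors[0], errors[-1])
```

The unblocked estimate badly underestimates the error of the mean; successive blocking levels recover it, which is why blocked averages stabilize error estimates for correlated samplers.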
Monte Carlo ICRH simulations in fully shaped anisotropic plasmas
Jucker, M.; Graves, J. P.; Cooper, W. A.; Mellet, N.; Brunner, S.
2008-11-01
In order to numerically study the effects of Ion Cyclotron Resonant Heating (ICRH) on the fast particle distribution function in general plasma geometries, three codes have been coupled: VMEC generates a general (2D or 3D) MHD equilibrium including full shaping and pressure anisotropy. This equilibrium is then mapped into Boozer coordinates. The full-wave code LEMan then calculates the power deposition and electromagnetic field strength of a wave field generated by a chosen antenna using a warm model. Finally, the single particle Hamiltonian code VENUS combines the outputs of the two previous codes in order to calculate the evolution of the distribution function. Within VENUS, Monte Carlo operators for Coulomb collisions of the fast particles with the background plasma have been implemented, accounting for pitch angle and energy scattering. Also, ICRH is simulated using Monte Carlo operators on the Doppler shifted resonant layer. The latter operators act in velocity space and induce a change of perpendicular and parallel velocity depending on the electric field strength and the corresponding wave vector. Eventually, the change in the distribution function will then be fed into VMEC for generating a new equilibrium and thus a self-consistent solution can be found. This model is an enhancement of previous studies in that it is able to include full 3D effects such as magnetic ripple, treat the effects of non-zero orbit width consistently and include the generation and effects of pressure anisotropy. Here, first results of coupling the three codes will be shown in 2D tokamak geometries.
Discovering correlated fermions using quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Wagner, Lucas K.; Ceperley, David M.
2016-09-01
It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.
Monte Carlo procedure for protein design
NASA Astrophysics Data System (ADS)
Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik
1998-11-01
A method for sequence optimization in protein models is presented. The approach, which inherits from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] the philosophy of maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward infinity.
Monte Carlo algorithm for free energy calculation.
Bi, Sheng; Tong, Ning-Hua
2015-07-01
We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
Quantum Monte Carlo calculations for light nuclei.
Wiringa, R. B.
1998-10-23
Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (Jπ, T) states, plus isobaric analogs, are obtained, and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize shot number and dose delivery.
Kinetic Monte Carlo simulations of proton conductivity
NASA Astrophysics Data System (ADS)
Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.
2014-07-01
The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism, which is widely believed to be responsible for fast proton mobility. We show in detail that the relative frequency of the reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. To test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole (C7H6N2) molecules is studied.
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multigroup cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous-energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous-energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well suited to this task.
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
NASA Astrophysics Data System (ADS)
Bauge, E.
2015-01-01
The "Full model" evaluation process, which is used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariance data associated with these evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the "Full model" evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that yields evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distributions of the model parameters and the derived observables.
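The backward-forward weighting idea can be caricatured in a few lines: forward, sample model parameters from a prior; backward, weight each sample by its generalized chi-square against experiment, then form weighted moments. Here a toy linear "model" stands in for TALYS and all numbers are illustrative:

```python
import math
import random

def bfmc_weights(param_samples, model, data, sigma):
    """Backward step: weight each forward parameter sample by its
    chi-square against the experimental data."""
    weights = []
    for p in param_samples:
        pred = model(p)
        chi2 = sum(((y - d) / s) ** 2 for y, d, s in zip(pred, data, sigma))
        weights.append(math.exp(-0.5 * chi2))
    norm = sum(weights)
    return [w / norm for w in weights]

def weighted_mean_var(values, weights):
    m = sum(v * w for v, w in zip(values, weights))
    var = sum(w * (v - m) ** 2 for v, w in zip(values, weights))
    return m, var

# Toy stand-in for a nuclear-model code: one parameter, two "measurements".
model = lambda p: [p, 2.0 * p]
data, sigma = [1.0, 2.2], [0.1, 0.2]

rng = random.Random(7)
samples = [rng.gauss(1.0, 0.5) for _ in range(20000)]   # forward: prior sampling
weights = bfmc_weights(samples, model, data, sigma)
mean, var = weighted_mean_var(samples, weights)
print(mean, math.sqrt(var))
```

The weighted spread of the parameter samples is exactly what propagates into a covariance matrix for any derived observable.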
Quantum Monte Carlo : not just for energy levels.
Nollett, K. M.
2007-01-01
Quantum Monte Carlo and realistic interactions can provide well-motivated vertices and overlaps for DWBA analyses of reactions. Given an interaction in vacuum, there are several computational approaches to nuclear systems, as you have been hearing: the no-core shell model with Lee-Suzuki or Bloch-Horowitz effective Hamiltonians; coupled clusters with a G-matrix interaction; density functional theory, granted an energy functional derived from the interaction; and quantum Monte Carlo, in its variational Monte Carlo and Green's function Monte Carlo forms. The last two work directly with a bare interaction and bare operators and describe the wave function without expanding in basis functions, so they have rather different sets of advantages and disadvantages from the others. Variational Monte Carlo (VMC) is built on a sophisticated Ansatz for the wave function, with shell-model-like structure modified by operator correlations. Green's function Monte Carlo (GFMC) uses an operator method to project the true ground state out of a reasonable guess wave function.
Self-Consistent Monte Carlo Simulations of Positive Discharges
NASA Astrophysics Data System (ADS)
Kortshagen, Uwe; Lawler, James E.
1999-10-01
Fully converged simulations of positive column discharges using single-electron or "direct simulation" Monte Carlo codes were reported at GEC98. Initial solutions at low RxN (product of column radius and gas density) were found using only personal computers. Solutions at higher RxN, corresponding to an ion mean free path of 1/4 the column radius, have now been found using a supercomputer. Sixteen converged simulations, reaching a Debye length of 1/17 the column radius, are available [1]. The simulations illustrate sheath formation and the negative dynamic resistance of the positive column at low currents. The simulation results have been reproduced using entirely independent codes. No fluid approximations or plasma-sheath boundary conditions are used. The simulations are valuable for comparison to other types of fluid and kinetic theory models. [1] J. E. Lawler and U. Kortshagen, J. Phys. D: Appl. Phys., submitted.
Direct Simulation Monte Carlo (DSMC) on the Connection Machine
Wong, B.C.; Long, L.N.
1992-01-01
The massively parallel Connection Machine is used to implement an improved version of the direct simulation Monte Carlo (DSMC) method for solving flows with the Boltzmann equation. Kinetic theory is required for analyzing hypersonic aerospace applications, and the features and capabilities of the DSMC particle-simulation technique are discussed. The DSMC method is shown to be inherently massively parallel and data parallel, and the algorithm is based on molecule movements, cross-referencing their locations, locating collisions within cells, and sampling macroscopic quantities in each cell. The serial DSMC code is compared to the present parallel DSMC code, and timing results show that the speedup of the parallel version is approximately linear. The correct physics can be resolved from the results of the complete DSMC method implemented on the Connection Machine using the data-parallel approach. 41 refs.
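The four phases named above (molecule movement, cross-referencing locations into cells, collisions within cells, per-cell sampling) can be sketched serially in miniature; this is a 1-D toy with an artificial collision model and illustrative parameters, not a physical DSMC:

```python
import random

def dsmc_step(pos, vel, ncell, lbox, dt, coll_prob, rng):
    """One toy DSMC step in 1-D: free flight, cell indexing,
    pairwise collisions within each cell, and per-cell sampling."""
    # 1. molecule movement (specular walls)
    for i in range(len(pos)):
        pos[i] += vel[i] * dt
        if pos[i] < 0.0:
            pos[i], vel[i] = -pos[i], -vel[i]
        if pos[i] > lbox:
            pos[i], vel[i] = 2 * lbox - pos[i], -vel[i]
    # 2. cross-reference particle locations into cells
    cells = [[] for _ in range(ncell)]
    for i, x in enumerate(pos):
        cells[min(int(x / lbox * ncell), ncell - 1)].append(i)
    # 3. sample collisions between random pairs within each cell
    for members in cells:
        rng.shuffle(members)
        for a, b in zip(members[::2], members[1::2]):
            if rng.random() < coll_prob:
                vel[a], vel[b] = vel[b], vel[a]   # toy hard-sphere exchange
    # 4. sample a macroscopic quantity (mean velocity) per cell
    return [sum(vel[i] for i in m) / len(m) if m else 0.0 for m in cells]

rng = random.Random(3)
n, lbox = 200, 1.0
pos = [rng.random() * lbox for _ in range(n)]
vel = [rng.gauss(0.0, 1.0) for _ in range(n)]
for _ in range(50):
    macro = dsmc_step(pos, vel, 10, lbox, 0.01, 0.5, rng)
print(len(macro))
```

Because each phase acts independently on particles or cells, every phase maps naturally onto data-parallel hardware, which is the property the paper exploits.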
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
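The decorator-based design can be sketched as follows, with hypothetical class names (SKIRT's actual hierarchy differs): a building block draws random positions from its own density, and decorators wrap any component to shift it or add clumps, chaining freely:

```python
import random

class Component:
    """Base interface: a density model that can draw random positions."""
    def random_position(self, rng):
        raise NotImplementedError

class UniformSphere(Component):
    """Building block: uniform density inside a sphere of given radius."""
    def __init__(self, radius):
        self.radius = radius
    def random_position(self, rng):
        while True:  # rejection sampling inside the bounding cube
            p = tuple(rng.uniform(-self.radius, self.radius) for _ in range(3))
            if sum(c * c for c in p) <= self.radius ** 2:
                return p

class ShiftDecorator(Component):
    """Decorator: translate any wrapped component without modifying it."""
    def __init__(self, inner, offset):
        self.inner, self.offset = inner, offset
    def random_position(self, rng):
        p = self.inner.random_position(rng)
        return tuple(c + o for c, o in zip(p, self.offset))

class ClumpDecorator(Component):
    """Decorator: relocate a fraction of the mass into compact clumps."""
    def __init__(self, inner, clump_centers, clump_radius, fraction):
        self.inner = inner
        self.clumps = [ShiftDecorator(UniformSphere(clump_radius), c)
                       for c in clump_centers]
        self.fraction = fraction
    def random_position(self, rng):
        if rng.random() < self.fraction:
            return rng.choice(self.clumps).random_position(rng)
        return self.inner.random_position(rng)

# Decorators chain without special cases: a shifted, clumpy sphere.
rng = random.Random(5)
model = ShiftDecorator(
    ClumpDecorator(UniformSphere(1.0), [(0.5, 0.0, 0.0)], 0.1, 0.3),
    (10.0, 0.0, 0.0))
points = [model.random_position(rng) for _ in range(1000)]
print(points[0])
```

Each decorator only needs to know how to transform positions drawn from the component it wraps, which is what keeps the building blocks composable and free of duplication.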
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Goluoglu, Sedat; Bekar, Kursat B; Wiarda, Dorothea
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Monte Carlo estimates of edge particle sources in TJ-II plasmas
NASA Astrophysics Data System (ADS)
López-Bruna, D.; Popov, Tsv; de la Cal, E.
2016-03-01
Three-dimensional calculations of the electron source in plasmas of the TJ-II stellarator (Madrid, Spain) are performed using the Monte Carlo code EIRENE. When possible, the results are compared with diagnostic measurements in equivalent coordinates. Examples are shown for the Hα light evolution during a plasma collapse, charge-exchange (CX) fluxes, neutral-particle distributions along diagnostic chords, and line radiation emissivities.
Teacher's Corner: Using SAS for Monte Carlo Simulation Research in SEM
ERIC Educational Resources Information Center
Fan, Xitao; Fan, Xiaotao
2005-01-01
This article illustrates the use of the SAS system for Monte Carlo simulation work in structural equation modeling (SEM). Data generation procedures for both multivariate normal and nonnormal conditions are discussed, and relevant SAS codes for implementing these procedures are presented. A hypothetical example is presented in which Monte Carlo…
Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods
Kiedrowski, Brian C; Brown, Forrest B
2010-01-01
Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.
Neutron transport calculations using Quasi-Monte Carlo methods
Moskowitz, B.S.
1997-07-01
This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("quasi-Monte Carlo method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
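The contrast the abstract describes can be sketched with a toy one-dimensional integral. This is a hedged illustration only: the integrand, sample counts, and the base-2 van der Corput sequence are our choices, not the paper's transport problems.

```python
import random

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) low-discrepancy point in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def estimate(points, f):
    """Plain Monte Carlo quadrature: average f over the sample points."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x              # true integral over [0, 1] is 1/3
N, runs = 2048, 20

# RMS error of standard Monte Carlo over repeated pseudorandom runs
sq_err = 0.0
for s in range(runs):
    random.seed(s)
    pts = [random.random() for _ in range(N)]
    sq_err += (estimate(pts, f) - 1.0 / 3.0) ** 2
rms_pseudo = (sq_err / runs) ** 0.5

# a single deterministic quasirandom run of the same size
err_quasi = abs(estimate([van_der_corput(i + 1) for i in range(N)], f) - 1.0 / 3.0)
```

For this smooth integrand the quasirandom error decays roughly like (log N)/N rather than N^(-1/2), so `err_quasi` comes out well below `rms_pseudo`, mirroring the paper's observation.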
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Densmore, Jeffrey D; Thompson, Kelly G; Urbatsch, Todd J
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Monte Carlo simulation of ICRF discharge initiation in ITER
NASA Astrophysics Data System (ADS)
Tripský, M.; Wauters, T.; Lyssoivan, A.; Křivská, A.; Louche, F.; Van Schoor, M.; Noterdaeme, J.-M.
2015-12-01
Discharges produced and sustained by ion cyclotron range of frequency (ICRF) waves in the absence of plasma current will be used on ITER for (ion cyclotron) wall conditioning (ICWC). The simulations presented here aim to ensure that the ITER ICRH&CD system can be safely employed for ICWC and to find optimal parameters for initiating the plasma. The 1D Monte Carlo code RFdinity1D3V was developed to simulate ICRF discharge initiation. The code traces the electron motion along one toroidal magnetic field line, accelerated by the RF field in front of the ICRF antenna. Electron collisions in the calculations are handled by a Monte Carlo procedure taking into account their energies and the related electron collision cross sections for collisions with H2, H2+ and H+. The code also includes Coulomb collisions between electrons and ions (e - e, e - H2+, e - H+). We study the electron multiplication rate as a function of the RF discharge parameters: (i) antenna input power (0.1-5 MW) and (ii) neutral pressure (H2), for two antenna phasings (monopole [0000] phasing and small dipole [0π0π] phasing). Furthermore, we investigate the dependence of the electron multiplication rate on the distance from the antenna straps. This radial dependence results from the decreasing electric field amplitude and field smoothing with increasing distance from the antenna straps. The numerical plasma breakdown definition used in the code corresponds to the moment when a critical electron density nec for the lower hybrid resonance (ω = ωLHR) is reached. This numerical definition was previously found to be in qualitative agreement with experimental breakdown times obtained from the literature and from experiments on ASDEX Upgrade and TEXTOR.
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
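The global stepping stage described above, repeatedly sampling a tabulated PDF to advance the electron from kugel to kugel, can be caricatured with inverse-CDF table lookup. The exit-energy table below is invented for illustration; it is not MRMC library data.

```python
import bisect
import random

def sample_table(values, cdf, rng):
    """Inverse-CDF sampling from a tabulated distribution, the way a kugel
    library lookup might work (the table itself is hypothetical)."""
    return values[bisect.bisect_left(cdf, rng.random())]

rng = random.Random(3)
# Hypothetical exit-energy table for one kugel: the electron keeps
# 80-99% of its energy on each kugel-sized step.
frac = [0.80, 0.85, 0.90, 0.95, 0.99]
cdf = [0.05, 0.20, 0.55, 0.90, 1.00]

energy, n_steps = 2.0, 0          # start at 2 MeV
while energy > 0.25:              # stop below the library's lowest energy
    energy *= sample_table(frac, cdf, rng)
    n_steps += 1
```

In the real method the sampled PDFs also update position and trajectory, and the step size matches the kugel radius; the sketch keeps only the energy dimension.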
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-04-25
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Quantum Monte Carlo for atoms and molecules
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1–4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schrödinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90–100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
Experimental Monte Carlo Quantum Process Certification
NASA Astrophysics Data System (ADS)
Steffen, Lars; Fedorov, Arkady; Baur, Matthias; Palmer da Silva, Marcus; Wallraff, Andreas
2012-02-01
Experimental implementations of quantum information processing have now reached a state, at which quantum process tomography starts to become impractical, since the number of experimental settings as well as the computational cost of the post processing required to extract the process matrix from the measurements scales exponentially with the number of qubits in the system. In order to determine the fidelity of an implemented process relative to the ideal one, a more practical approach called Monte Carlo quantum process certification was proposed in Ref. [1]. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup. Our system is realized with three superconducting transmon qubits coupled to a coplanar microwave resonator which is used for the joint-readout of the qubit states. We demonstrate an implementation of Monte Carlo quantum process certification and determine the fidelity of different two- and three-qubit gates such as cphase-, cnot-, 2cphase- and Toffoli-gates. The obtained results are compared with the values obtained from conventional process tomography and the errors of the obtained fidelities are determined. [1] M. P. da Silva, O. Landon-Cardinal, and D. Poulin, arXiv:1104.3835 (2011).
Quantum Monte Carlo methods for nuclear physics
Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.
2015-09-09
Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Quantum Monte Carlo methods for nuclear physics
Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.
2014-10-19
Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
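As a concrete, if simplified, illustration of annealing-style Monte Carlo clustering, the sketch below groups a handful of 1-D range values by randomly reassigning cluster labels and accepting worse moves with a temperature-dependent probability. The data, cost function, and cooling schedule are invented for illustration; the paper's three algorithms and flight imagery are not reproduced here.

```python
import math
import random

def cost(ranges, labels, k):
    """Within-cluster sum of squared deviations from each cluster mean."""
    total = 0.0
    for c in range(k):
        pts = [r for r, l in zip(ranges, labels) if l == c]
        if pts:
            mu = sum(pts) / len(pts)
            total += sum((p - mu) ** 2 for p in pts)
    return total

def anneal_cluster(ranges, k, steps=4000, t0=5.0, cooling=0.998, seed=1):
    """Simulated-annealing search over label assignments; a toy stand-in
    for the clustering approaches compared in the paper."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in ranges]
    cur = cost(ranges, labels, k)
    best, best_labels = cur, labels[:]
    t = t0
    for _ in range(steps):
        i = rng.randrange(len(ranges))
        old = labels[i]
        labels[i] = rng.randrange(k)        # propose a random relabeling
        new = cost(ranges, labels, k)
        if new <= cur or rng.random() < math.exp((cur - new) / max(t, 1e-12)):
            cur = new                        # accept (possibly worse) move
            if cur < best:
                best, best_labels = cur, labels[:]
        else:
            labels[i] = old                  # reject, restore old label
        t *= cooling
    return best_labels, best

# two well-separated "objects", e.g. sparse range returns near 10 m and 50 m
ranges = [10.1, 9.8, 10.3, 50.2, 49.7, 50.5]
labels, best = anneal_cluster(ranges, k=2)
```

Setting the initial temperature to zero turns this into the basic greedy Monte Carlo variant, which is the comparison the paper is interested in.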
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
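The kind of calculation described, turning an assumed burial-depth distribution into a recommended probing depth, reduces to drawing many random depths and reading off a quantile. In this sketch the lognormal distribution and its parameters are illustrative assumptions, not values from the study.

```python
import random

def probing_depth(n_sim=100_000, coverage=0.9, seed=42):
    """Monte Carlo estimate of the probing depth (in metres) that would
    reach the given fraction of simulated burials.  The burial-depth
    distribution here is an assumed lognormal, purely for illustration."""
    rng = random.Random(seed)
    depths = sorted(rng.lognormvariate(0.0, 0.5) for _ in range(n_sim))
    return depths[int(coverage * n_sim)]

depth = probing_depth()   # depth reaching ~90% of simulated burials
```

The same pattern extends to the resuscitation-time question: draw each unknown parameter from its assumed distribution, simulate the scenario, and optimize the decision variable over the resulting outcome statistics.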
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle-streaming communication is complete, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
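For the global particle find, the essential operation is mapping a particle coordinate to the processor whose domain contains it. A minimal 1-D sketch follows; the decomposition below is hypothetical and stands in for the dissertation's full constructive-solid-geometry domains.

```python
import bisect

def find_owner(x, domain_bounds):
    """Return the index of the domain (processor) owning coordinate x,
    given sorted domain boundaries [b0, b1, ..., bD]."""
    return bisect.bisect_right(domain_bounds, x) - 1

bounds = [0.0, 2.5, 5.0, 7.5, 10.0]   # four domains covering [0, 10)
owners = [find_owner(x, bounds) for x in (1.0, 3.3, 9.9)]   # → [0, 1, 3]
```

In the parallel setting each rank would apply this lookup to its stray particles and then send each one to its owning rank, which is where the communication-completion detection the dissertation discusses comes in.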
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on whether the underlying discretization is Milstein or Euler–Maruyama, respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
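The multilevel idea, combining coupled coarse and fine solutions so that most samples are cheap, can be sketched for a toy scalar SDE with Euler–Maruyama. This is a generic textbook MLMC sketch under our own choices of SDE and sample counts; the Langevin collision model, Milstein scheme, and Lévy-area machinery of the paper are not reproduced.

```python
import math
import random

def euler_pair(level, rng):
    """One coupled fine/coarse Euler–Maruyama sample of dX = X dW, X(0) = 1,
    over [0, 1].  The fine path uses 2**level steps; the coarse path (one
    level down) reuses the summed Brownian increments, which is what keeps
    the level corrections small."""
    nf = 2 ** level
    h = 1.0 / nf
    xf, xc, dw_sum = 1.0, 1.0, 0.0
    for n in range(nf):
        dw = rng.gauss(0.0, math.sqrt(h))
        xf += xf * dw
        dw_sum += dw
        if level > 0 and n % 2 == 1:   # every two fine steps = one coarse step
            xc += xc * dw_sum
            dw_sum = 0.0
    return xf, xc

def mlmc_estimate(max_level=5, n0=20_000, seed=7):
    """Telescoping MLMC estimator of E[X(1)] (exactly 1 for this SDE),
    with sample counts halved on each finer, costlier level."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(max_level + 1):
        n = max(n0 >> level, 200)
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(level, rng)
            acc += xf if level == 0 else xf - xc
        est += acc / n
    return est

est = mlmc_estimate()
```

Because the fine and coarse paths share Brownian increments, the variance of each correction term shrinks with level, so accuracy comparable to a single fine-level Monte Carlo run is reached at a fraction of the cost, which is the mechanism behind the ε⁻² versus ε⁻³ scaling quoted in the abstract.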