Condensed history Monte Carlo methods for photon transport problems
Bhan, Katherine; Spanier, Jerome
2007-01-01
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions. PMID:18548128
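The similarity-relation idea exploited above can be illustrated with a small sketch. Assuming the Henyey-Greenstein phase function commonly used in tissue optics (an assumption for illustration; the paper's specific CH models are not reproduced here), the anisotropy factor g is the first angular moment of the phase function, and the first-order similarity relation replaces the true scattering coefficient with a reduced one:

```python
import random

def hg_sample_cos_theta(g, rng=random):
    """Sample cos(theta) from the Henyey-Greenstein phase function,
    whose mean (first angular moment) equals the anisotropy factor g."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def reduced_scattering(mu_s, g):
    """First-order similarity relation: mu_s' = mu_s * (1 - g).
    A medium with this reduced coefficient and isotropic scattering
    approximates the original anisotropic medium at large depths."""
    return mu_s * (1.0 - g)
```

A quick check that the sampled mean recovers g is one way to validate such a sampler before using it in a transport loop.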
Modelling photon transport in non-uniform media for SPECT with a vectorized Monte Carlo code.
Smith, M F
1993-10-01
A vectorized Monte Carlo code has been developed for modelling photon transport in non-uniform media for single-photon-emission computed tomography (SPECT). The code is designed to compute photon detection kernels, which are used to build system matrices for simulating SPECT projection data acquisition and for use in matrix-based image reconstruction. Non-uniform attenuating and scattering regions are constructed from simple three-dimensional geometric shapes, in which the density and mass attenuation coefficients are individually specified. On a Stellar GS1000 computer, Monte Carlo simulations are performed between 1.6 and 2.0 times faster when the vector processor is utilized than when computations are performed in scalar mode. Projection data acquired with a clinical SPECT gamma camera for a line source in a non-uniform thorax phantom are well modelled by Monte Carlo simulations. The vectorized Monte Carlo code was used to simulate a 99Tcm SPECT myocardial perfusion study, and compensations for non-uniform attenuation and the detection of scattered photons improve activity estimation. The speed increase due to vectorization makes Monte Carlo simulation more attractive as a tool for modelling photon transport in non-uniform media for SPECT. PMID:8248288
Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques
2014-12-01
Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the subroutine package PENGEOM from the PENELOPE code under Linux-based and Windows OS. This new version includes the routine ELMAG for electron and positron transport simulation in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the simulation time.
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database covering many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
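The macro-step lookup idea described above can be shown with a deliberately simplified 1-D sketch: take large pre-computed steps deep inside a homogeneous region and drop to small steps near a boundary. The database values, step sizes, and the `transport` helper below are invented placeholders for illustration, not data or code from the thesis:

```python
# Toy macro-step database: (material, step_cm) -> mean energy loss (MeV) per
# step. In a real MMC code these entries come from conventional MC pre-runs.
MACRO_DB = {
    ("water", 1.0): 2.0,
    ("water", 0.1): 0.2,
    ("bone", 1.0): 3.5,
    ("bone", 0.1): 0.35,
}

def transport(energy, depth_cm, material, boundary_cm):
    """Advance a particle with large macro steps while the boundary is far,
    switching to small steps near it, until energy or geometry runs out."""
    while energy > 0.0 and depth_cm < boundary_cm:
        step = 1.0 if boundary_cm - depth_cm > 1.0 else 0.1
        energy -= MACRO_DB[(material, step)]
        depth_cm += step
    return max(energy, 0.0), depth_cm
```

The efficiency gain of MMC comes from the large steps replacing many individual condensed-history sub-steps; accuracy near interfaces is preserved by the fallback to small steps.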
Monte Carlo photon transport on vector and parallel supercomputers: Final report
Martin, W.R.; Nowak, P.F.
1986-12-01
The University of Michigan has been investigating the implementation of vectorized and parallelized Monte Carlo algorithms for the analysis of photon transport in an inertially-confined fusion (ICF) plasma. The goal of this work is to develop and test Monte Carlo algorithms for vector/parallel supercomputers such as the Cray X-MP and Cray-2. Previous effort has resulted in the development of a vectorized photon transport code, named VPHOT, and a companion scalar code, named SPHOT, that performs the same analysis and is used for comparative purposes to assess the performance of the vectorized algorithm. A test problem, denoted the ICF test problem, has been created and tested with the VPHOT and SPHOT codes. By comparison with a reference LLNL calculation of the ICF test problem, the VPHOT/SPHOT codes have been verified to predict the correct results. Performance results with VPHOT versus SPHOT and the reference LLNL code have been reported previously and indicate that speedups in the range of 6 to 12 can be achieved with the vectorized algorithm versus the conventional scalar algorithm on the Cray X-MP. This report summarizes the progress made during the last year to continue the investigation of vectorized Monte Carlo (parameter studies, alternative vectorized algorithm, alternative target machines) and to extend the work into the area of parallel processing. 5 refs.
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. The main task in MC dose calculation research is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented, focusing on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were used, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying suitable variance-reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested with many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
MCNP: a general Monte Carlo code for neutron and photon transport
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron photon transport code system with approximately 250 person years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its data base the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N Particle where N particle varieties will be transported.
Comparison of Monte Carlo collimator transport methods for photon treatment planning in radiotherapy
Schmidhalter, D.; Manser, P.; Frei, D.; Volken, W.; Fix, M. K.
2010-02-15
Purpose: The aim of this work was a Monte Carlo (MC) based investigation of the impact of different radiation transport methods in the collimators of a linear accelerator on photon beam characteristics, dose distributions, and efficiency. In particular, it is investigated whether different simplifications of the radiation transport can be used in some clinical situations in order to save calculation time. Methods: Within the Swiss Monte Carlo Plan, a GUI-based framework for photon MC treatment planning, different MC methods are available for the radiation transport through the collimators [secondary jaws and multileaf collimator (MLC)]: EGSnrc (reference), VMC++, and Pin (an in-house developed MC code). Additional non-full transport methods were implemented in order to provide different complexity levels for the MC simulation: considering collimator attenuation only, considering Compton scatter only or just the first Compton process, and considering the collimators as totally absorbing. Furthermore, either a simple or an exact geometry of the collimators can be selected for the absorbing or attenuation method. Phase spaces directly above and dose distributions in a water phantom are analyzed for academic and clinical treatment fields using 6 and 15 MV beams, including intensity modulated radiation therapy with dynamic MLC. Results: For all MC transport methods, differences in the radial mean energy and radial energy fluence are within 1% inside the geometric field. Below the collimators, the energy fluence is underestimated for non-full MC transport methods, ranging from 5% for Compton to 100% for Absorbing. Gamma analysis using EGSnrc-calculated doses as reference shows that the percentage of voxels fulfilling a 1%/1 mm criterion is at least 98% when using the VMC++, Compton, or firstCompton transport methods. When using the methods Pin, Transmission, Flat-Transmission, Flat-Absorbing or Absorbing, the mean value of points fulfilling this criterion over all tested cases is 97
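The two cheapest non-full collimator modes described above, totally absorbing and attenuation-only, amount to applying a weight factor along each photon's ray. A minimal sketch, assuming a straight-line path through the collimator and a single linear attenuation coefficient (not the Swiss Monte Carlo Plan implementation):

```python
import math

def collimator_weight(path_cm, mu_per_cm, method):
    """Photon weight after traversing path_cm of collimator material.
    'absorbing': any intersection kills the photon.
    'attenuation': Beer-Lambert attenuation, no scattered photons produced.
    Full transport (EGSnrc/VMC++) is not sketched here."""
    if method == "absorbing":
        return 0.0 if path_cm > 0.0 else 1.0
    if method == "attenuation":
        return math.exp(-mu_per_cm * path_cm)
    raise ValueError(f"unknown method: {method}")
```

The trade-off reported in the abstract follows directly: both modes are essentially free compared with full transport, but neither can reproduce the scatter contribution below the collimators, which is why the energy fluence there is underestimated.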
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
Chi, Y; Tian, Z; Jiang, S; Jia, X
2015-06-15
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only for voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aimed at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU's shared memory for fast access. Geometry functions were rewritten to enable identification of the body containing the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package on an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, the quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
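Locating the body that contains a particle, as described in the Methods above, reduces to testing on which side of each limiting quadric surface the point lies. A minimal CPU sketch of that sign test (without the tree acceleration, shared-memory layout, or cross terms of the general quadric; body definitions below are illustrative):

```python
def quadric(coeff, p):
    """Evaluate f = Axx*x^2 + Ayy*y^2 + Azz*z^2 + Ax*x + Ay*y + Az*z + A0
    at point p (cross terms of the general quadric omitted for brevity)."""
    axx, ayy, azz, ax, ay, az, a0 = coeff
    x, y, z = p
    return axx*x*x + ayy*y*y + azz*z*z + ax*x + ay*y + az*z + a0

def locate(bodies, p):
    """Return the first body whose side conditions all hold at p.
    Each body is (name, [(coeff, required_sign), ...]); sign -1 means the
    point must lie where f < 0, sign +1 where f >= 0."""
    for name, surfaces in bodies:
        if all((quadric(c, p) < 0.0) == (sign < 0) for c, sign in surfaces):
            return name
    return None

# Illustrative geometry: a unit sphere inside an unbounded "world" body.
BODIES = [
    ("sphere", [((1, 1, 1, 0, 0, 0, -1), -1)]),  # x^2+y^2+z^2-1 < 0
    ("world", []),                                # matches everything else
]
```

In the GPU code this lookup sits on the hot path of every transport step, which is why the abstract stores the parameters in shared memory and organizes bodies in a tree rather than scanning linearly as this sketch does.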
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Space applications of the MITS electron-photon Monte Carlo transport code system
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M.
2010-07-15
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ~3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
2013-06-24
Version 07 TART2012 is a coupled neutron-photon Monte Carlo transport code designed to use three-dimensional (3-D) combinatorial geometry. Neutron and/or photon sources as well as neutron-induced photon production can be tracked. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2012 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2012 extends the general utility of the code to even more areas of application than previous releases by concentrating on improving the physics, particularly with regard to an improved treatment of neutron fission, resonance self-shielding, and molecular binding, and by extending the input options used by the code. Several utilities are included for creating input files and displaying TART results and data. TART2012 uses the latest ENDF/B-VI, Release 8, data. New for TART2012 is the use of continuous-energy neutron cross sections, in addition to its traditional multigroup cross sections. For neutron interaction, the data are derived using ENDF-ENDL2005 and include both continuous-energy cross sections and 700-group neutron data derived using a combination of ENDF/B-VI, Release 8, and ENDL data. The 700-group structure extends from 10^-5 eV up to 1 GeV. Presently nuclear data are only available up to 20 MeV, so that only 616 of the groups are currently used. For photon interaction, 701-point photon data were derived using the Livermore EPDL97 file. The new 701-point structure extends from 100 eV up to 1 GeV, and is currently used over this entire energy range. TART2012 completely supersedes all older versions of TART, and it is strongly recommended that one use only the most recent version of TART2012 and its data files. Check the authors' homepage for related information: http
A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code
1998-06-12
TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
Code System for Monte Carlo Simulation of Electron and Photon Transport.
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e., those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple-scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
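The hard/soft split at the heart of this mixed (class II) scheme can be sketched as a simple cutoff test: events exceeding either cutoff are simulated individually, and everything below both is lumped into a condensed multiple-scattering step. The cutoff values below are illustrative only, not PENELOPE's actual parameters, which are material- and energy-dependent:

```python
def classify(deflection_deg, energy_loss_keV,
             angle_cut_deg=10.0, loss_cut_keV=10.0):
    """Class-II bookkeeping: an interaction is 'hard' (simulated in detail)
    if its deflection angle or energy loss exceeds the preset cutoff;
    otherwise it is 'soft' and absorbed into a multiple-scattering step."""
    if deflection_deg > angle_cut_deg or energy_loss_keV > loss_cut_keV:
        return "hard"
    return "soft"
```

Raising the cutoffs moves more interactions into the condensed soft channel and speeds up the simulation, at the cost of resolving less detail per track; lowering them recovers an increasingly detailed (analog) simulation.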
Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.
VALDEZ, GREG D.
2012-11-30
Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, this method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, lens-system simplification theory is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results. PMID:20689705
Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport.
Jia, Xun; Gu, Xuejun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B
2010-06-01
Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development on a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested with respect to the original sequential DPM code on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source is used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 times have been observed, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor. PMID:20463376
Ryman, J.C.; Eckerman, K.F.; Shultis, J.K.; Faw, R.E.; Dillman, L.T.
1996-04-01
Federal Guidance Report No. 12 tabulates dose coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, water, and soil. Although the dose coefficients of this report are based on previously developed dosimetric methodologies, they are derived from new, detailed calculations of energy and angular distributions of the radiations incident on the body and the transport of these radiations within the body. Effort was devoted to expanding the information available for assessment of radiation dose from radionuclides distributed on or below the surface of the ground. A companion paper (External Exposure to Radionuclides in Air, Water, and Soil) discusses the significance of the new tabulations of coefficients and provides detailed comparisons to previously published values. This paper discusses details of the photon transport calculations.
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, three-dimensional, combinatorial-geometry, time-dependent Monte Carlo radiation transport code that can run on any modern computer. It is a complete system to assist with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also very fast compared to other similar codes, and use of the entire system can save a great deal of time and effort. TART98 is distributed on CD, which contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems for familiarization with the system. TART98 completely supersedes all older versions of TART, and users are strongly recommended to use only the most recent version of TART98 and its data files.
Badal, Andreu; Badano, Aldo
2009-11-15
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz
2004-02-01
External gamma exposures from radionuclides deposited on surfaces usually make the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective and averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by use of the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment relative to a reference value of 8.39 x 10(-4) pGy per gamma m(-2). The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute the largest part (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors as well, but the ratio will increasingly be shifted in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, and 50 y).
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuous Slowing Down Approximation (CSDA). Photon transport was handled using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10^6 histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
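The delta-tracking (Woodcock) scheme mentioned in the abstract above can be sketched as follows. This is a generic illustration of the technique, not the ARCHER implementation; the function and parameter names are assumptions:

```python
import math
import random

def delta_track(p, d, sigma_t, sigma_max, rng=random.random):
    """Woodcock (delta) tracking: sample flight distances against a
    majorant cross section sigma_max, then accept a real collision at
    the tentative site with probability sigma_t(site) / sigma_max.
    Rejected ('virtual') collisions let the photon keep flying without
    a change of direction, so no boundary crossings need be computed."""
    while True:
        s = -math.log(rng()) / sigma_max            # tentative free flight
        p = [pi + s * di for pi, di in zip(p, d)]   # advance the photon
        if rng() < sigma_t(p) / sigma_max:          # real vs. virtual collision
            return p                                # real collision site

# In a uniform medium (sigma_t == sigma_max) the first sample is always real:
site = delta_track([0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   lambda p: 2.0, 2.0, rng=lambda: 0.5)
```

The appeal for voxelized geometry is that the majorant removes per-voxel boundary tracking at the cost of some rejected samples.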
Electron-photon transport using the EGS4 (Electron Gamma Shower) Monte Carlo Code
Nelson, W.R.; Hirayama, H.; Rogers, D.W.O.
1986-01-01
The EGS (Electron Gamma Shower) code system was formally introduced in 1978 as a package, most commonly referred to as EGS3. It was designed to simulate electromagnetic cascades in various geometries and at energies up to a few thousand gigaelectron volts and down to cutoff kinetic energies of 0.1 MeV (photons) and 1 MeV (electrons). There have been many requests to extend EGS3 down to lower energies, and this is a major, but not the only, reason for creating EGS4, which is now available for general distribution and is the subject of this presentation. A summary is given of the main features of the EGS4 code system, including statements about the physics that has been put into it and what can be realistically simulated. 6 refs.
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu
2016-01-01
The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only the well-known photon splitting and interaction forcing methods but is also upgraded with a new propagation separation method and is highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with previous experimental and simulation results by the ESRF group and with new restricted-geometry experiments carried out at SPring-8. PMID:26698070
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With cell counts above a million, tallies above a hundred million, and particle histories above ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the basic memory requirement of the model exceeds that of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model by domain decomposition and nested parallel computation. The k_eff and flux of each cell are obtained. (authors)
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
2008-02-29
Version 00 (1) Problems to be solved: MVP/GMVP II can solve eigenvalue and fixed-source problems. The multigroup code GMVP can solve forward and adjoint problems for neutron, photon and neutron-photon coupled transport. The continuous-energy code MVP can solve only the forward problems. Both codes can also perform time-dependent calculations. (2) Geometry description: MVP/GMVP employs combinatorial geometry to describe the calculation geometry. It describes spatial regions by the combination of the 3-dimensional objects (BODIes). Currently, the following objects (BODIes) can be used. - BODIes with linear surfaces: half space, parallelepiped, right parallelepiped, wedge, right hexagonal prism - BODIes with quadratic surface and linear surfaces: cylinder, sphere, truncated right cone, truncated elliptic cone, ellipsoid by rotation, general ellipsoid - Arbitrary quadratic surface and torus. The rectangular and hexagonal lattice geometry can be used to describe the repeated geometry. Furthermore, the statistical geometry model is available to treat coated fuel particles or pebbles for high temperature reactors. (3) Particle sources: The various forms of energy-, angle-, space- and time-dependent distribution functions can be specified. See Abstract for more detail.
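The combinatorial-geometry idea of building regions from BODY memberships can be sketched with boolean point predicates. This is a hypothetical illustration of the concept only; the body types and names here are simplified and are not the MVP/GMVP input syntax:

```python
# Each BODY is a point-membership predicate; a spatial region is a
# boolean combination (intersection, union, complement) of BODIes.

def sphere(center, radius):
    return lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, center)) <= radius ** 2

def half_space(normal, offset):  # points p with dot(normal, p) <= offset
    return lambda p: sum(ni * pi for ni, pi in zip(normal, p)) <= offset

def intersect(*bodies):
    return lambda p: all(b(p) for b in bodies)

def complement(body):
    return lambda p: not body(p)

# Region: inside the unit sphere but strictly above the plane z = 0.
region = intersect(sphere((0.0, 0.0, 0.0), 1.0),
                   complement(half_space((0.0, 0.0, 1.0), 0.0)))
```

A Monte Carlo tracker built on such predicates classifies any point of a particle trajectory by evaluating the region expressions, which is what makes arbitrary combinations of simple BODIes usable as calculation geometry.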
Taranenko, V; Meckbach, R; Degteva, M O; Bougrov, N G; Göksu, Y; Vorobiova, M I; Jacob, P
2003-04-01
An area located in the Southern Urals was contaminated in 1949-1956 as a result of radioactive waste releases into the Techa river by the Mayak Production Association. The external dose reconstruction of the Techa river dosimetry system (TRDS-2000) for the exposed population is based on an assessment of dose rates in air (DRA) obtained by modeling transport and deposition of radionuclides along the river for the time before 1952 and by gamma dose rate measurements since 1952. The aim of this paper is to contribute to a verification of the TRDS-2000 external dose assessment. Absorbed doses in bricks from a 130-year-old building in the heavily exposed Metlino settlement were measured by a luminescence technique. By the autumn of 1956 the population of Metlino had been evacuated, and then a water reservoir was created at the village location, which led to a change in the radioactive source geometry. Radiation transport calculations for assumed environmental sources before and since 1957 were performed with the MCNP Monte Carlo code. In combination with TRDS-2000 estimates for annual dose rates in air at the shore of the Techa river for the period 1949-1956 and contemporary dose rate in air measurements, absorbed doses in bricks were calculated. These calculations were performed deterministically with best estimates of the modeling parameters and stochastically by propagating uncertainty distributions through the calculation scheme. Assessed doses in bricks were found to be consistent with measured values within the uncertainty bounds, while their best estimates were approximately 15% lower than the luminescence measurements. PMID:12687379
Monte Carlo Transport for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
NASA Astrophysics Data System (ADS)
Zhao, X. Y.; Haworth, D. C.; Ren, T.; Modest, M. F.
2013-04-01
A computational fluid dynamics model for high-temperature oxy-natural gas combustion is developed and exercised. The model features detailed gas-phase chemistry and radiation treatments (a photon Monte Carlo method with line-by-line spectral resolution for gas and wall radiation - PMC/LBL) and a transported probability density function (PDF) method to account for turbulent fluctuations in composition and temperature. The model is first validated for a 0.8 MW oxy-natural gas furnace, and the level of agreement between model and experiment is found to be at least as good as any that has been published earlier. Next, simulations are performed with systematic model variations to provide insight into the roles of individual physical processes and their interplay in high-temperature oxy-fuel combustion. This includes variations in the chemical mechanism and the radiation model, and comparisons of results obtained with versus without the PDF method to isolate and quantify the effects of turbulence-chemistry interactions and turbulence-radiation interactions (TRI). In this combustion environment, it is found to be important to account for the interconversion of CO and CO2, and radiation plays a dominant role. The PMC/LBL model allows the effects of molecular gas radiation and wall radiation to be clearly separated and quantified. Radiation and chemistry are tightly coupled through the temperature, and correct temperature prediction is required for correct prediction of the CO/CO2 ratio. Turbulence-chemistry interactions influence the computed flame structure and mean CO levels. Strong local effects of turbulence-radiation interactions are found in the flame, but the net influence of TRI on computed mean temperature and species profiles is small. The ultimate goal of this research is to simulate high-temperature oxy-coal combustion, where accurate treatments of chemistry, radiation and turbulence-chemistry-particle-radiation interactions will be even more important.
TOPICAL REVIEW: Monte Carlo modelling of external radiotherapy photon beams
NASA Astrophysics Data System (ADS)
Verhaegen, Frank; Seuntjens, Jan
2003-11-01
An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-window biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
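A weight-window check of the kind described above can be sketched as follows. This is a generic textbook version of the technique, not the report's implementation; the survival-weight choice and function name are assumptions:

```python
import random

def apply_weight_window(w, w_low, w_high, rng=random.random):
    """One weight-window check: split particles above the window,
    play Russian roulette on particles below it, and pass particles
    inside it through unchanged. Returns (n_copies, weight_per_copy)."""
    w_survive = 0.5 * (w_low + w_high)       # assumed roulette survival weight
    if w > w_high:
        n = int(w / w_high) + 1              # split, conserving total weight
        return n, w / n
    if w < w_low:
        if rng() < w / w_survive:            # survive with p = w / w_survive,
            return 1, w_survive              # which keeps the game unbiased
        return 0, 0.0                        # killed by roulette
    return 1, w                              # inside the window: unchanged

# A weight-10 particle against the window [0.5, 2.0] splits into 6 copies:
n, wc = apply_weight_window(10.0, 0.5, 2.0)
```

Automating the biasing then amounts to choosing w_low and w_high per region and energy, which is where the adjoint-flux calculations mentioned in the abstract come in.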
Fast photon-boundary intersection computation for Monte Carlo simulation of photon migration
NASA Astrophysics Data System (ADS)
Zhao, Xiaofen; Liu, Hongyan; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-01-01
The Monte Carlo (MC) method is generally used as a "gold standard" technique to simulate photon transport in biomedical optics. However, it is quite time-consuming, since abundant photon propagations need to be simulated in order to achieve an accurate result. In the case of complicated geometry, the computation speed is bound up with the calculation of the intersection between the photon transmission path and the media boundary. The ray-triangle-based method is often used to calculate the photon-boundary intersection in shape-based MC simulation of light propagation, but it is still relatively time-consuming. We present a fast way to determine the photon-boundary intersection. Triangle meshes are used to describe the boundary structure. A line segment, instead of a ray, is used to check whether a photon-boundary intersection exists, as the next location of the photon in light transport is determined by the step size. Results suggest that by simply replacing the conventional ray-triangle-based method with the proposed line segment-triangle-based method, the MC simulation of light propagation in the mouse model can be sped up by more than 35%.
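The segment-triangle test described above can be sketched with the standard Moller-Trumbore algorithm restricted to the segment parameter range 0 <= t <= 1. This is an illustrative sketch of the idea, not the authors' code:

```python
def segment_triangle_hit(p0, p1, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection restricted to the segment p0 -> p1.
    Returns the segment parameter t in [0, 1] of the hit, or None when
    the photon step does not cross the triangle (v0, v1, v2)."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    d = sub(p1, p0)                      # unnormalized step direction
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:                     # step parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(p0, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:               # outside in the first barycentric axis
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:           # outside in the second barycentric axis
        return None
    t = f * dot(e2, q)
    return t if 0.0 <= t <= 1.0 else None   # hit must lie within the step

# A unit step along x crosses a triangle in the plane x = 0.5 at t = 0.5;
# a shorter step that stops before the plane reports no intersection.
tri = ((0.5, -1.0, -1.0), (0.5, 2.0, -1.0), (0.5, -1.0, 2.0))
t_hit = segment_triangle_hit((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), *tri)
t_miss = segment_triangle_hit((0.0, 0.0, 0.0), (0.4, 0.0, 0.0), *tri)
```

The speed-up claimed in the abstract comes from the final range check: a step shorter than the distance to the surface is rejected without any further boundary handling.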
Recent advances in the Mercury Monte Carlo particle transport code
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
Coupled electron-photon radiation transport
Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.
2000-01-17
Massively-parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.
Monte Carlo Ion Transport Analysis Code.
2009-04-15
Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.
Monte Carlo simulation of photon-induced air showers
NASA Astrophysics Data System (ADS)
D'Ettorre Piazzoli, B.; di Sciascio, G.
1994-05-01
The EPAS code (Electron Photon-induced Air Showers) is a three-dimensional Monte Carlo simulation developed to study the properties of extensive air showers (EAS) generated by the interaction of high energy photons (or electrons) in the atmosphere. Results of the present simulation concern the longitudinal, lateral, temporal and angular distributions of electrons in atmospheric cascades initiated by photons of energies up to 10^3 TeV.
Trahan, Travis J.; Gentile, Nicholas A.
2012-09-10
Statistical uncertainty is inherent to any Monte Carlo simulation of radiation transport problems. In space-angle-frequency independent radiative transfer calculations, the uncertainty in the solution is entirely due to random sampling of source photon emission times. We have developed a modification to the Implicit Monte Carlo algorithm that eliminates noise due to sampling of the emission time of source photons. In problems that are independent of space, angle, and energy, the new algorithm generates a smooth solution, while a standard implicit Monte Carlo solution is noisy. For space- and angle-dependent problems, the new algorithm exhibits reduced noise relative to standard implicit Monte Carlo in some cases, and comparable noise in all other cases. In conclusion, the improvements are limited to short time scales; over long time scales, noise due to random sampling of spatial and angular variables tends to dominate the noise reduction from the new algorithm.
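The contrast between randomly sampled and noise-free emission times can be sketched as follows; the stratified midpoint placement is an illustration of the idea, not the authors' algorithm:

```python
import random

def emission_times_random(n, t0, t1, rng=random.random):
    """Standard approach: each source photon's emission time is sampled
    uniformly at random over the timestep [t0, t1]. The sample mean and
    spread fluctuate from run to run, which shows up as tally noise."""
    return [t0 + (t1 - t0) * rng() for _ in range(n)]

def emission_times_stratified(n, t0, t1):
    """Noise-free alternative: emission times placed deterministically
    at the midpoints of n equal strata, so the per-timestep emission
    distribution carries no sampling noise at all."""
    dt = (t1 - t0) / n
    return [t0 + (i + 0.5) * dt for i in range(n)]
```

In a problem with no space, angle, or frequency dependence, the emission time is the only sampled quantity, so removing its randomness yields the smooth solution described above.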
An efficient framework for photon Monte Carlo treatment planning.
Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J
2007-10-01
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. Automation is therefore needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. Thereby
Transport in Sawtooth photonic lattices
NASA Astrophysics Data System (ADS)
Weimann, Steffen; Morales-Inostroza, Luis; Real, Bastián; Cantillano, Camilo; Szameit, Alexander; Vicencio, Rodrigo A.
2016-06-01
We investigate, theoretically and experimentally, a photonic realization of a Sawtooth lattice. This special lattice exhibits two spectral bands, with one of them experiencing a complete collapse to a highly degenerate flat band for a special set of inter-site coupling constants. We report the observation of different transport regimes, including strong transport inhibition due to the appearance of the non-diffractive flat band. Moreover, we excite localized Shockley surface states, residing in the gap between the two linear bands.
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems
NASA Astrophysics Data System (ADS)
Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc
2015-11-01
The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect produce electrons as secondary particles which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time-consuming because the electrons interact continuously, and therefore the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown that it is possible to include a separately computed coupled photon-electron contribution such as ISII in a photon calculation, improving on this crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can be safely considered as isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with minimal effort. A data library describing the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows an almost perfect agreement. The use of the database increases
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
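The Fleck-Cummings scheme generalized above hinges on an implicitness factor that converts part of the absorption/re-emission into effective scattering within a timestep; a minimal sketch, where `alpha` is the user-chosen implicitness parameter and `beta` the ratio of radiation to material heat capacities:

```python
def fleck_factor(sigma_a, c, dt, beta, alpha=1.0):
    """Fleck-Cummings factor f = 1 / (1 + alpha * beta * c * dt * sigma_a).

    A fraction f of each absorption is treated as true absorption; the
    remaining (1 - f) acts as effective (elastic) scattering within the
    timestep. As dt grows, f shrinks, so more energy is redistributed
    instead of absorbed, which is what keeps large timesteps stable."""
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_a)
```

For example, with an absorption opacity of 1 cm^-1, c = 3e10 cm/s, beta = 1 and dt = 1 ns, most of the absorption is reclassified as effective scattering.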
A quantum photonic dissipative transport theory
NASA Astrophysics Data System (ADS)
Lei, Chan U.; Zhang, Wei-Min
2012-05-01
In this paper, a quantum transport theory for describing photonic dissipative transport dynamics in nanophotonics is developed. The nanophotonic devices concerned in this paper consist of on-chip all-optical integrated circuits incorporating photonic bandgap waveguides and driven resonators embedded in nanostructured photonic crystals. The photonic transport through waveguides is entirely determined from the exact master equation of the driven resonators, which is obtained by explicitly eliminating all the degrees of freedom of the waveguides (treated as reservoirs). Back-reactions from the reservoirs are fully taken into account. The relation between the driven photonic dynamics and photocurrents is obtained explicitly. The non-Markovian memory structure and quantum decoherence dynamics in photonic transport can then be fully addressed. As an illustration, the theory is utilized to study the transport dynamics of a photonic transistor consisting of a nanocavity coupled to two waveguides in photonic crystals. The controllability of photonic transport through the external driven field is demonstrated.
Applications of the Monte Carlo radiation transport toolkit at LLNL
NASA Astrophysics Data System (ADS)
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Monte Carlo simulation of photon way in clinical laser therapy
NASA Astrophysics Data System (ADS)
Ionita, Iulian; Voitcu, Gabriel
2011-07-01
The multiple scattering of light can increase the efficiency of laser therapy for inflammatory diseases by enlarging the treated area. Light absorption is essential for the treatment, while scattering dominates. Multiple scattering effects must be introduced using the Monte Carlo method for modeling light transport in tissue, and finally to calculate the optical parameters. Diffuse reflectance measurements were made on highly concentrated suspensions of live leukocytes under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the measured values of diffuse reflectance. The principal idea of MC simulation applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live-cell suspension is a compromise between the homogeneous layer of the MC model and the light-cell interaction of in-vivo experiments. In this way MC simulation allows us to compute the absorption coefficient. The values of the optical parameters, derived from simulation by best fitting of the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.
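The "follow the optical path of a photon" idea can be sketched with a toy semi-infinite-medium random walk; the isotropic scattering and the specific coefficients are simplifying assumptions for illustration, not the paper's model:

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, n_photons=20000, seed=1):
    """Toy estimate of diffuse reflectance from a semi-infinite medium.

    Photons enter at depth z=0 heading into the tissue; each collision
    survives with probability equal to the single-scattering albedo
    mu_s / (mu_a + mu_s), and scattering is assumed isotropic. Any
    photon that re-crosses z < 0 is tallied as diffusely reflected."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                    # depth and z-direction cosine
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)  # exponential step
            if z < 0.0:
                reflected += 1              # escaped back through surface
                break
            if rng.random() > albedo:
                break                       # absorbed at this collision
            uz = 2.0 * rng.random() - 1.0   # isotropic new direction cosine
    return reflected / n_photons
```

Fitting in the paper runs this logic in reverse: the absorption coefficient `mu_a` is adjusted until the simulated reflectance matches the measured one.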
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems, from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
West, J.T.; Murphy, J.
1988-01-01
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
NASA Astrophysics Data System (ADS)
Tabary, J.; Glière, A.
A Monte Carlo radiation transport simulation program, EGS Nova, and the Computer Aided Design software BRL-CAD have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current state, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes within a single NDE software environment, without having to switch to a separate Monte Carlo code and its parameter set. Numerical validations show good agreement with EGS4-computed and published data. As the program's major drawback is its execution time, computational efficiency improvements are foreseen.
Hybrid Monte-Carlo method for simulating neutron and photon radiography
NASA Astrophysics Data System (ADS)
Wang, Han; Tang, Vincent
2013-11-01
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature varies greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations with constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
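The feedback loop between absorbed dose and optical properties can be sketched for a single voxel; the linear temperature dependence of the absorption coefficient and all numerical values below are hypothetical placeholders, not the study's measured properties:

```python
def mu_a_of_T(T):
    """Hypothetical linear temperature dependence of the absorption
    coefficient (1/m); real materials need measured data."""
    return 0.5 + 0.002 * (T - 300.0)

def coupled_heating_step(T, fluence, dt, rho_c=4.0e6):
    """One feedback iteration: select optical properties from the
    CURRENT temperature, compute volumetric heating from absorption,
    then advance the temperature (rho_c = volumetric heat capacity,
    J/(m^3 K); fluence rate in W/m^2 times mu_a gives W/m^3)."""
    q = mu_a_of_T(T) * fluence           # heat source, W/m^3
    return T + q * dt / rho_c            # explicit temperature update

# drive a single voxel through time; properties are re-selected
# from the updated temperature at every step, closing the loop
T = 300.0
for _ in range(100):
    T = coupled_heating_step(T, fluence=5.0e7, dt=0.01)
```

A constant-property simulation would freeze `mu_a_of_T` at its initial value; the gap between the two trajectories grows with total temperature rise, which is the error the paper quantifies.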
Photon beam characterization and modelling for Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Deng, Jun; Jiang, Steve B.; Kapur, Ajay; Li, Jinsheng; Pawlicki, Todd; Ma, C.-M.
2000-02-01
Photon beams of 4, 6 and 15 MV from Varian Clinac 2100C and 2300C/D accelerators were simulated using the EGS4/BEAM code system. The accelerators were modelled as a combination of component modules (CMs) consisting of a target, primary collimator, exit window, flattening filter, monitor chamber, secondary collimator, ring collimator, photon jaws and protection window. A full phase space file was scored directly above the upper photon jaws and analysed using beam data processing software, BEAMDP, to derive the beam characteristics, such as planar fluence, angular distribution, energy spectrum and the fractional contributions of each individual CM. A multiple-source model has been further developed to reconstruct the original phase space. Separate sources were created with accurate source intensity, energy, fluence and angular distributions for the target, primary collimator and flattening filter. Good agreement (within 2%) between the Monte Carlo calculations with the source model and those with the original phase space was achieved in the dose distributions for field sizes of 4 cm × 4 cm to 40 cm × 40 cm at source surface distances (SSDs) of 80-120 cm. The dose distributions in lung and bone heterogeneous phantoms have also been found to be in good agreement (within 2%) for 4, 6 and 15 MV photon beams for various field sizes between the Monte Carlo calculations with the source model and those with the original phase space.
Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
Zhang, Ying; Feng, Yuanming; Ming, Xin; Deng, Jun
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients of different tumor sites were simulated with Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve a more personalized care for individual patient with dosimetric gains. PMID:26977413
Monte Carlo treatment planning for photon and electron beams
NASA Astrophysics Data System (ADS)
Reynaert, N.; van der Marck, S. C.; Schaart, D. R.; Van der Zee, W.; Van Vliet-Vroegindeweij, C.; Tomsej, M.; Jansen, J.; Heijmen, B.; Coghe, M.; De Wagter, C.
2007-04-01
During the last few decades, accuracy in photon and electron radiotherapy has increased substantially. This is partly due to enhanced linear accelerator technology, providing more flexibility in field definition (e.g. the usage of computer-controlled dynamic multileaf collimators), which led to intensity modulated radiotherapy (IMRT). Important improvements have also been made in the treatment planning process, more specifically in the dose calculations. Originally, dose calculations relied heavily on analytic, semi-analytic and empirical algorithms. The more accurate convolution/superposition codes use pre-calculated Monte Carlo dose "kernels" partly accounting for tissue density heterogeneities. It is generally recognized that the Monte Carlo method is able to increase accuracy even further. Since the second half of the 1990s, several Monte Carlo dose engines for radiotherapy treatment planning have been introduced. To enable the use of a Monte Carlo treatment planning (MCTP) dose engine in clinical circumstances, approximations have been introduced to limit the calculation time. In this paper, the literature on MCTP is reviewed, focussing on patient modeling, approximations in linear accelerator modeling and variance reduction techniques. An overview of published comparisons between MC dose engines and conventional dose calculations is provided for phantom studies and clinical examples, evaluating the added value of MCTP in the clinic. An overview of existing Monte Carlo dose engines and commercial MCTP systems is presented and some specific issues concerning the commissioning of a MCTP system are discussed.
Shield weight optimization using Monte Carlo transport calculations
NASA Technical Reports Server (NTRS)
Jordan, T. M.; Wohl, M. L.
1972-01-01
Outlines are given of the theory used in the FASTER-3 Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries. The code has the additional capability of calculating the minimum-weight layered unit shield configuration which will meet a specified dose rate constraint. It includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have a specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. Results are presented for sample problems involving primary neutron and both primary and secondary photon transport in a spherical reactor shield configuration. These results include the optimization of the shield configuration.
Monte Carlo radiation transport parallelism
Cox, L. J.; Post, S. E.
2002-01-01
This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
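The global particle find step can be illustrated in one dimension: given sorted domain boundaries, the owning processor of a stray particle follows from a bisection over its coordinate. This is a toy of the idea only; the dissertation's algorithm resolves particles against a 3-D decomposed constructive solid geometry:

```python
import bisect

def find_owner(x, boundaries):
    """Return the index of the domain (processor) owning coordinate x,
    where boundaries[i] <= x < boundaries[i+1] defines domain i."""
    i = bisect.bisect_right(boundaries, x) - 1
    if i < 0 or i >= len(boundaries) - 1:
        raise ValueError("particle lies outside the decomposed geometry")
    return i

def global_particle_find(particles, boundaries):
    """Route every stray particle to its owning domain in one pass;
    in the real code each bucket would be sent to its processor via MPI."""
    routed = {}
    for x in particles:
        routed.setdefault(find_owner(x, boundaries), []).append(x)
    return routed
```

The bisection costs O(log D) per particle for D domains, so even with millions of processors the lookup itself is never the bottleneck; the hard part, as the dissertation argues, is making the communication pattern scalable.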
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has traditionally been applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support the simulation. Neutronics fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, the preequilibrium model, the nucleus explosion model, the fission model, and the evaporation model are incorporated to treat the intermediate energy nuclear
Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, Steven
1999-01-01
The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
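The random-walk scaling underlying such approximations can be checked with a toy simulation: for isotropic scattering with fixed step length, random walk theory predicts an rms displacement of step·√n after n scatterings. The sketch below is a plain illustration of that scaling, not Platnick's actual formula (which also involves the asymmetry parameter and single scattering albedo):

```python
import math
import random

def rms_displacement(n_steps, n_walks=2000, step=1.0, seed=1):
    """Monte Carlo estimate of the rms displacement of an isotropic
    2D random walk after n_steps scatterings."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_walks)

# Random walk theory predicts rms ≈ step * sqrt(n) for isotropic scattering
for n in (25, 100):
    print(n, rms_displacement(n), math.sqrt(n))
```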
Fiber transport of spatially entangled photons
NASA Astrophysics Data System (ADS)
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time and frequency domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; we therefore discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G
2008-01-01
The present work centers on reconstructing the primary beam photon spectrum of a linear accelerator by means of a scatter analysis method. This technique is based on irradiating the isocenter of a rectangular block made of methacrylate placed at a distance of 100 cm from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scatter measurement after contact with the attenuator. Measured ionization values allow calculating the spectrum as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements have been made on an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
Transport of photons produced by lightning in clouds
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard
1991-01-01
The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.
Buck, R M; Hall, J M
1999-06-01
COG is a major multiparticle simulation code in the LLNL Monte Carlo radiation transport toolkit. It was designed to solve deep-penetration radiation shielding problems in arbitrarily complex 3D geometries, involving coupled transport of photons, neutrons, and electrons. COG was written to provide as much accuracy as the underlying cross-sections will allow, and has a number of variance-reduction features to speed computations. Recently COG has been applied to the simulation of high- resolution radiographs of complex objects and the evaluation of contraband detection schemes. In this paper we will give a brief description of the capabilities of the COG transport code and show several examples of neutron and gamma-ray imaging simulations. Keywords: Monte Carlo, radiation transport, simulated radiography, nonintrusive inspection, neutron imaging.
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Dirgayussa, I Gde Eka; Yani, Sitti; Haryanto, Freddy; Rhani, M. Fahdillah
2015-09-30
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model for a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for this LINAC head commissioning was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head model was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied among 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom are percent depth doses (PDDs) and beam profiles at a depth of 10 cm, which were compared with measurements. This process is complete if the difference between measured and calculated relative depth-dose data along the central axis and the dose profile at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 leaf collimator and can be used for reliable patient dose calculations. In this commissioning process, the good
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model for a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for this LINAC head commissioning was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head model was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied among 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom are percent depth doses (PDDs) and beam profiles at a depth of 10 cm, which were compared with measurements. This process is complete if the difference between measured and calculated relative depth-dose data along the central axis and the dose profile at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 leaf collimator and can be used for reliable patient dose calculations. In this commissioning process, the good criteria of dose
Photonic sensor applications in transportation security
NASA Astrophysics Data System (ADS)
Krohn, David A.
2007-09-01
There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.
Review of Monte Carlo modeling of light transport in tissues.
Zhu, Caigang; Liu, Quan
2013-05-01
A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
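The photon-packet tracking loop described above (sample an exponential free path, deposit the absorbed fraction of the packet weight, scatter, terminate by Russian roulette) can be sketched in a few lines. This is a minimal illustration for an infinite homogeneous medium with isotropic scattering, not any specific code surveyed in the review; in an infinite medium all launched weight is eventually absorbed, which provides a simple conservation check.

```python
import math
import random

def simulate_packet(mu_a, mu_s, rng, w_min=1e-4):
    """Track one photon packet in an infinite medium: sample an exponential
    free path, deposit the absorbed fraction of the weight at each collision,
    scatter isotropically, and terminate by Russian roulette."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0          # launch along +z
    w, absorbed = 1.0, 0.0
    while True:
        s = -math.log(rng.random()) / mu_t   # free path length
        x += s * ux; y += s * uy; z += s * uz
        absorbed += w * (1.0 - albedo)       # deposit absorbed fraction
        w *= albedo
        if w < w_min:                        # Russian roulette termination
            if rng.random() < 0.1:
                w /= 0.1                     # survivor: boost weight
            else:
                return absorbed
        # isotropic scattering: new direction uniform on the sphere
        uz = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - uz * uz)
        ux, uy = sin_t * math.cos(phi), sin_t * math.sin(phi)

rng = random.Random(42)
total = sum(simulate_packet(mu_a=0.1, mu_s=10.0, rng=rng) for _ in range(500))
print(total / 500)   # ≈ 1.0: all launched weight is eventually absorbed
```

Real tissue codes add a Henyey-Greenstein phase function, boundaries, and spatial scoring on top of this skeleton.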
A finite element approach for modeling photon transport in tissue.
Arridge, S R; Schweiger, M; Hiraoka, M; Delpy, D T
1993-01-01
The use of optical radiation in medical physics is important in several fields for both treatment and diagnosis. In all cases an analytic and computable model of the propagation of radiation in tissue is essential for a meaningful interpretation of the procedures. A finite element method (FEM) for deriving photon density inside an object, and photon flux at its boundary, assuming that the photon transport model is the diffusion approximation to the radiative transfer equation, is introduced herein. Results from the model for a particular case are given: the calculation of the boundary flux as a function of time resulting from a delta-function input to a two-dimensional circle (equivalent to a line source in an infinite cylinder) with homogeneous scattering and absorption properties. This models the temporal point spread function of interest in near infrared spectroscopy and imaging. The convergence of the FEM results is demonstrated, as the resolution of the mesh is increased, to the analytical expression for the Green's function for this system. The diffusion approximation is very commonly adopted as appropriate for cases which are scattering dominated, i.e., where μs > μa, and results from other workers have compared it to alternative models. In this article a high degree of agreement with a Monte Carlo method is demonstrated. The principal advantage of the FE method is its speed. It is in all ways as flexible as Monte Carlo methods and in addition can produce photon density everywhere, as well as flux on the boundary. One disadvantage is that there is no means of deriving individual photon histories. PMID:8497214
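For reference, the diffusion approximation admits a closed-form temporal point spread function in the simplest geometry: a delta-function source in an infinite homogeneous medium (the paper's 2D circular case has its own Green's function). A sketch, assuming the common convention D = 1/(3(μa + μs')) and illustrative tissue optical properties:

```python
import math

def tpsf_infinite(r, t, mu_a, mu_s_prime, c=0.214):
    """Photon fluence at distance r (mm) and time t (ps) for a delta-function
    source in an infinite homogeneous medium, from the diffusion
    approximation. c is the speed of light in tissue (mm/ps, n ≈ 1.4)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient (mm)
    return (c / (4.0 * math.pi * D * c * t) ** 1.5
            * math.exp(-r * r / (4.0 * D * c * t))
            * math.exp(-mu_a * c * t))

# TPSF at r = 30 mm for illustrative optical properties (1/mm)
curve = [(t, tpsf_infinite(30.0, t, mu_a=0.01, mu_s_prime=1.0))
         for t in range(100, 4000, 100)]
t_peak = max(curve, key=lambda p: p[1])[0]
print(t_peak)   # time of peak fluence, in ps
```

The late-time slope of ln Φ approaches −μa·c, which is the basis of time-resolved absorption measurements.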
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Karim Karoui, Mohamed; Kharrati, Hedi
2013-07-15
Purpose: This paper presents the results of a series of calculations to determine buildup factors for ordinary concrete, baryte concrete, lead, steel, and iron in broad beam geometry for photon energies from 0.125 to 25.125 MeV at 0.250 MeV intervals. Methods: The Monte Carlo N-Particle (MCNP) radiation transport computer code was used to determine the buildup factors for the studied shielding materials. Results: Broad-beam transmission for primary beams was computed from the buildup factor data for nine published megavoltage photon beam spectra ranging from 4 to 25 MV in nominal energy, representing linacs made by the three major manufacturers. The first tenth-value layer and the equilibrium tenth-value layer are calculated from the broad beam transmission for these nine primary megavoltage photon beam spectra. Conclusions: The results, compared with published data, show the ability of these buildup factor data to predict shielding transmission curves for the primary radiation beam. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to compute broad beam transmission for barriers in radiotherapy x-ray shielding facilities.
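The link between buildup factors and broad-beam transmission can be illustrated with a short sketch: broad-beam transmission is the narrow-beam exponential multiplied by the buildup factor, and a tenth-value layer follows by root finding. The Berger-form coefficients and the attenuation coefficient below are illustrative placeholders, not values from the paper.

```python
import math

def transmission(x, mu, a=1.0, b=0.03):
    """Broad-beam transmission: narrow-beam attenuation exp(-mu*x) corrected
    by a Berger-form buildup factor B = 1 + a*mu*x*exp(b*mu*x).
    The coefficients a and b here are illustrative placeholders."""
    mx = mu * x
    buildup = 1.0 + a * mx * math.exp(b * mx)
    return buildup * math.exp(-mx)

def tenth_value_layer(mu, a=1.0, b=0.03):
    """Thickness reducing broad-beam transmission to 0.1, by bisection
    (transmission is monotonically decreasing for these coefficients)."""
    lo, hi = 0.0, 1000.0 / mu
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if transmission(mid, mu, a, b) > 0.1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = 0.05   # illustrative linear attenuation coefficient (1/mm)
tvl = tenth_value_layer(mu)
print(tvl, transmission(tvl, mu))
```

Note that buildup makes the first tenth-value layer thicker than the narrow-beam value ln(10)/μ, which is why equilibrium and first TVLs are reported separately.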
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods, by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions in computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces the computation time wasted on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube together with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm⁻¹.
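The role of light self-attenuation in the readout can be illustrated with a small numerical sketch: thermoluminescent light emitted at depth z is attenuated over its path to the heated readout surface, so reading from the irradiated side and from the opposite side gives different signals whenever the deposited dose is depth-dependent. The dose profile and chip thickness below are illustrative assumptions; only the attenuation coefficient 2.2 mm⁻¹ comes from the abstract.

```python
import math

def readout(dose_profile, alpha, L, n=2000, from_front=True):
    """Light collected when the TLD is read out from one side: each depth z
    contributes dose(z) attenuated over its path to the readout surface."""
    dz = L / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        path = z if from_front else (L - z)
        total += dose_profile(z) * math.exp(-alpha * path) * dz
    return total

# Illustrative exponentially attenuated dose deposition in a 0.8 mm chip
L, mu = 0.8, 3.0
dose = lambda z: math.exp(-mu * z)

alpha = 2.2   # light attenuation coefficient for CaF2:NaCl (1/mm)
ratio = (readout(dose, alpha, L, from_front=True)
         / readout(dose, alpha, L, from_front=False))
print(ratio)  # > 1: more light escapes from the irradiated (front) side
```

With α = 0 the two readouts coincide, which is consistent with the negligible effect reported for lithium fluoride.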
Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation
NASA Astrophysics Data System (ADS)
Yang, J.; Li, J. S.; Qin, L.; Xiong, W.; Ma, C.-M.
2004-06-01
The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4 based Monte Carlo user code BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.
Monte Carlo Nucleon Meson Transport Code System.
2000-11-17
Version 00 NMTC/JAERI97 is an upgraded version of the code system NMTC/JAERI, which was developed in 1982 at JAERI and is based on the CCC-161/NMTC code system. NMTC/JAERI97 simulates high energy nuclear reactions and nucleon-meson transport processes.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce
NASA Astrophysics Data System (ADS)
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
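The map/reduce decomposition the abstract describes (Map tasks simulating independent photon histories, a Reduce task summing per-voxel absorption scores) can be sketched in plain Python. This is a toy 1D absorption model, not the MC321 or Hadoop code itself; the optical coefficients, depth binning, and weight cutoff are illustrative assumptions:

```python
import math
import random
from collections import Counter

def map_task(n_photons, mu_a=0.1, mu_s=0.9, seed=0):
    """Map task: simulate n_photons weighted 1D random walks and emit
    (depth_bin, absorbed_weight) pairs, mimicking how the mappers
    score absorption per voxel. Coefficients are illustrative."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    pairs = []
    for _ in range(n_photons):
        x, w = 0.0, 1.0
        while w > 1e-4:                      # terminate low-weight photons
            x += -math.log(rng.random()) / mu_t  # sample a free path
            w_abs = w * mu_a / mu_t              # deposit the absorbed part
            pairs.append((int(x), w_abs))        # bin by unit depth
            w -= w_abs
    return pairs

def reduce_task(all_pairs):
    """Reduce task: sum absorbed weight per depth bin."""
    tally = Counter()
    for depth_bin, w in all_pairs:
        tally[depth_bin] += w
    return dict(tally)
```

A driver would run several `map_task` calls in parallel (each with its own seed) and feed all emitted pairs to one `reduce_task`; the summed tally is independent of how histories were partitioned across mappers, which is what makes the scheme fault-tolerant.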
Monte Carlo source model for photon beam radiotherapy: photon source characteristics
Fix, Michael K.; Keall, Paul J.; Dawson, Kathryn; Siebers, Jeffrey V.
2004-11-01
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty in characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it is possible to obtain information about which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1×1 to 30×30 cm², as well as a 10×10 cm² field 5 cm off axis in each direction. The 3D dose distributions, using either full PSD or the source model as input, were compared in terms of dose difference and distance to agreement. With this model, over 99% of the voxels agreed within ±1% or 1 mm for the target, within 2% or 2 mm for the primary collimator, and within ±2.5% or 2 mm for the flattening filter in all cases studied. For the dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model (including a charged particle source) and the full PSD were used as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source model commissioning procedures by allowing the histogram distributions representing the subsources to be scaled and tuned.
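The dose-difference / distance-to-agreement acceptance criterion used in such comparisons can be sketched in one dimension. This is a simplified illustration of the criterion, not the authors' code; the tolerances and data are hypothetical:

```python
def passes_1d(ref, test, spacing, dose_tol, dta):
    """1D dose-difference / distance-to-agreement check: a test voxel
    passes if its dose is within dose_tol (fraction of the max reference
    dose) of the reference at the same point, or if some reference voxel
    within dta (mm) carries the same dose within dose_tol."""
    dmax = max(ref)
    reach = int(dta / spacing)          # DTA expressed in voxels
    results = []
    for i, d in enumerate(test):
        ok = abs(d - ref[i]) <= dose_tol * dmax
        if not ok:                      # search the DTA neighborhood
            lo, hi = max(0, i - reach), min(len(ref), i + reach + 1)
            ok = any(abs(d - r) <= dose_tol * dmax for r in ref[lo:hi])
        results.append(ok)
    return results
```

With a 2%/2 mm criterion one would call `passes_1d(ref, test, spacing_mm, 0.02, 2.0)` and report the passing fraction over all voxels.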
Ilas, Dan; Eckerman, Keith F; Karagiannis, Harriet
2009-01-01
This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin, depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical of workers handling radiopharmaceuticals were performed for a range of energies of the source photons. The lack of charged-particle equilibrium necessitated full photon-electron coupled transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, SAFs for the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV were calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03040000), National Natural Science Foundation of China (910266004, 11305205, 11305203) and National Special Program for ITER (2014GB112001)
Monte Carlo Simulation of Light Transport in Tissue, Beta Version
2003-12-09
Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating the probabilities of scattering and absorption. Other measured quantities are the total reflected light, total transmitted light, and total heat absorbed.
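The statistical photon path the abstract describes (sample a free path, then decide between absorption and scattering) is the core loop of such a program. A minimal analog Monte Carlo sketch for a 1D slab, with illustrative optical coefficients rather than the program's actual parameters, might look like:

```python
import math
import random

def simulate_slab(n_photons, mu_a, mu_s, thickness, seed=1):
    """Analog Monte Carlo in a 1D slab: each photon starts at x = 0
    heading inward; at each interaction it is either absorbed or
    scattered into a new direction cosine. Tallies reflected,
    transmitted, and absorbed photons (every photon ends in exactly
    one bin, so the tallies conserve photon number)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    tally = {"reflected": 0, "transmitted": 0, "absorbed": 0}
    for _ in range(n_photons):
        x, mu = 0.0, 1.0                    # position, direction cosine
        while True:
            x += mu * (-math.log(rng.random()) / mu_t)  # sample free path
            if x < 0.0:
                tally["reflected"] += 1
                break
            if x > thickness:
                tally["transmitted"] += 1
                break
            if rng.random() < mu_a / mu_t:  # absorption vs. scattering
                tally["absorbed"] += 1
                break
            mu = rng.uniform(-1.0, 1.0)     # isotropic scattering (1D cosine)
    return tally
```

Dividing each tally by `n_photons` gives the total reflectance, transmittance, and absorbed fraction that the abstract lists as outputs.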
Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications
DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.
1995-12-31
The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface (MCNPRT, MCNP Radiation Therapy) has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback of this dosimetry system is the amount of time required to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution, and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 × 15 × 15 cm³ phantom composed of water voxels ranging in size from 10 × 10 × 10 mm³ to 2 × 2 × 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 based on a simulation phantom containing 5 × 5 × 5 mm³ water voxels. The code was also tested on the 80-node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produced a wall-clock speedup of 29 relative to one SUN SPARC2 computer.
Efficient, Automated Monte Carlo Methods for Radiation Transport
Kong, Rong; Ambrose, Martin; Spanier, Jerome
2012-01-01
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872
NASA Astrophysics Data System (ADS)
Carey, Matthew Glen
Particle transport of radionuclide photons using the Monte Carlo N-Particle computer code can be used to determine a portal monitor's photon detection efficiency, in units of counts per photon, for internally deposited radionuclides. Good agreement has been found with experimental results for radionuclides that emit higher energy photons, such as Cs-137 and Co-60. Detection efficiency for radionuclides that emit lower energy photons, such as Am-241, greatly depend on the effective discriminator energy level of the portal monitor as well as any attenuating material between the source and detectors. This evaluation uses a chi-square approach to determine the best fit discriminator level of a non-spectroscopic portal monitor when the effective discriminator level, in units of energy, is not known. Internal detection efficiencies were evaluated experimentally using an anthropomorphic phantom with NIST traceable sources at various internal locations, and by simulation using MCNP5. The results of this research find that MCNP5 can be an effective tool for simulation of photon detection efficiencies, given a known discriminator level, for internally and externally deposited radionuclides. In addition, MCNP5 can be used for bounding personnel doses from either internally or externally deposited mixtures of radionuclides.
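The chi-square scan over candidate discriminator levels can be sketched as follows. The levels, efficiencies, and data layout here are hypothetical, chosen only to illustrate the fitting step, not taken from the thesis:

```python
def best_fit_discriminator(levels, simulated, measured):
    """Chi-square scan: for each candidate discriminator level,
    simulated[level] holds per-source detection efficiencies from the
    Monte Carlo model and `measured` the corresponding experimental
    efficiencies. Returns the level minimizing
    chi^2 = sum((sim - meas)^2 / meas). All values are illustrative."""
    def chisq(sim):
        return sum((s - m) ** 2 / m for s, m in zip(sim, measured))
    return min(levels, key=lambda lv: chisq(simulated[lv]))
```

In practice one would tabulate simulated efficiencies on a grid of discriminator energies and take the minimizing level as the monitor's effective threshold.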
Overview and applications of the Monte Carlo radiation transport kit at LLNL
Sale, K E
1999-06-23
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Comparison of RTPS and Monte Carlo dose distributions in heterogeneous phantoms for photon beams.
Nakaguchi, Yuji; Araki, Fujio; Maruyama, Masato; Fukuda, Shogo
2010-04-20
The purpose of this study was to compare dose distributions from three different RTPS with those from Monte Carlo (MC) calculations and measurements, in heterogeneous phantoms for photon beams. This study used four algorithms for RTPS: AAA (analytical anisotropic algorithm) implemented in the Eclipse (Varian Medical Systems) treatment planning system, CC (collapsed cone) superposition from Pinnacle (Philips), and MGS (multigrid superposition) and FFT (fast Fourier transform) convolution from XiO (CMS). The dose distributions from these algorithms were compared with those from MC and measurements in a set of heterogeneous phantoms. Eclipse/AAA underestimated the dose inside the lung region for low energies of 4 and 6 MV. This is because Eclipse/AAA does not adequately account for the scaling of the pencil-beam spread (lateral electron transport) with changes in the electron density at low photon energies. The dose distributions from Pinnacle/CC and XiO/MGS almost agree with those of MC and measurements at low photon energies, but the errors increase at the high energy of 15 MV, especially for a small field of 3×3 cm². The FFT convolution greatly overestimated the dose inside the lung slab compared to MC. The dose distributions from the superposition algorithms almost agree with those from MC as well as measured values at 4 and 6 MV. The dose errors for Eclipse/AAA are larger in lung model phantoms for 4 and 6 MV. It is necessary to use algorithms comparable to superposition for accurate dose calculation in heterogeneous regions. PMID:20625219
Topologically robust transport of entangled photons in a 2D photonic system.
Mittal, Sunil; Orre, Venkata Vikram; Hafezi, Mohammad
2016-07-11
We theoretically study the transport of time-bin entangled photon pairs in a two-dimensional topological photonic system of coupled ring resonators. This system implements the integer quantum Hall model using a synthetic gauge field and exhibits topologically robust edge states. We show that the transport through edge states preserves temporal correlations of entangled photons whereas bulk transport does not preserve these correlations and can lead to significant unwanted temporal bunching or anti-bunching of photons. We study the effect of disorder on the quantum transport properties; while the edge transport remains robust, bulk transport is very susceptible, and in the limit of strong disorder, bulk states become localized. We show that this localization is manifested as an enhanced bunching/anti-bunching of photons. This topologically robust transport of correlations through edge states could enable robust on-chip quantum communication channels and delay lines for information encoded in temporal correlations of photons. PMID:27410836
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media
NASA Astrophysics Data System (ADS)
Doronin, Alex; Meglinski, Igor
2010-10-01
Due to recent intense developments in lasers and optical technologies, a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light, these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. Conceptual engineering design of new optical diagnostic systems requires a clear understanding of light-tissue interaction and the peculiarities of optical radiation propagation therein. The description of photon migration within random media is based on radiative transfer, which forms the basis of Monte Carlo modelling of light propagation in complex turbid media such as biological tissues. In the current presentation, as a further development of the Monte Carlo technique, we introduce a novel Object-Oriented Programming (OOP) paradigm accelerated by a Graphics Processing Unit that provides an opportunity to increase the performance of standard Monte Carlo simulation by over 100 times.
GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media
NASA Astrophysics Data System (ADS)
Doronin, Alex; Meglinski, Igor
2011-03-01
Due to recent intense developments in lasers and optical technologies, a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light, these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. Conceptual engineering design of new optical diagnostic systems requires a clear understanding of light-tissue interaction and the peculiarities of optical radiation propagation therein. The description of photon migration within random media is based on radiative transfer, which forms the basis of Monte Carlo modelling of light propagation in complex turbid media such as biological tissues. In the current presentation, as a further development of the Monte Carlo technique, we introduce a novel Object-Oriented Programming (OOP) paradigm accelerated by a Graphics Processing Unit that provides an opportunity to increase the performance of standard Monte Carlo simulation by over 100 times.
Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
Monte Carlo radiation transport: A revolution in science
Hendricks, J.
1993-04-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods, which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
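Particle navigation through quadric-bounded regions of the kind the abstract describes reduces to finding the nearest positive root of a quadratic in the path length. A minimal CPU-side sketch, assuming a symmetric coefficient matrix and illustrative surface coefficients (not the package's actual code):

```python
import math

def distance_to_quadric(p, d, Q):
    """Distance along unit direction d from point p to the quadric
    x^T A x + b.x + c = 0, with Q = (A, b, c), A a symmetric 3x3
    list-of-lists, b a 3-vector, c a scalar. Returns the smallest
    positive root of the path-length quadratic, or None if the ray
    never reaches the surface."""
    A, b, c = Q
    def mat(v):  # A @ v
        return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    qa = dot(d, mat(d))                       # quadratic term
    qb = 2.0 * dot(p, mat(d)) + dot(b, d)     # linear term (A symmetric)
    qc = dot(p, mat(p)) + dot(b, p) + c       # constant term
    eps = 1e-12
    if abs(qa) < eps:                         # surface degenerates to a plane
        return -qc / qb if qb and -qc / qb > eps else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None
    r = math.sqrt(disc)
    for t in sorted(((-qb - r) / (2 * qa), (-qb + r) / (2 * qa))):
        if t > eps:
            return t
    return None
```

A navigator would evaluate this distance for every bounding surface of the current region and step the particle to the minimum, which is exactly the per-surface work that the auxiliary index array strategy tries to avoid repeating.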
Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim
2012-10-15
Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photon energies from 10 keV to 150 keV at 5 keV intervals is presented. Methods: The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data to compute the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
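Given buildup factor data, the half and tenth value layers follow from solving the broad-beam transmission T(x) = B(x) exp(-mu x) for the target transmission. A sketch with a hypothetical linear buildup fit and an illustrative attenuation coefficient (not the paper's tabulated data):

```python
import math

def broad_beam_transmission(x, mu, buildup):
    """Broad-beam transmission T(x) = B(x) * exp(-mu * x), where
    `buildup` is a buildup-factor function of thickness x."""
    return buildup(x) * math.exp(-mu * x)

def value_layer(mu, buildup, target, x_hi=50.0):
    """Solve T(x) = target by bisection, assuming T decreases
    monotonically from T(0) = 1. target = 0.5 gives the HVL,
    target = 0.1 the TVL."""
    lo, hi = 0.0, x_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if broad_beam_transmission(mid, mu, buildup) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because B(x) >= 1, the broad-beam HVL is always at least the narrow-beam value ln(2)/mu, which is a useful sanity check on any fitted buildup data.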
Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames
Mehta, R.S.; Haworth, D.C.; Modest, M.F.
2010-05-15
A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O2 to 55% O2). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model.
SIMIND Monte Carlo simulation of a single photon emission CT
Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.
2010-01-01
In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to acquire its experimental characterization in terms of energy resolution, sensitivity, spatial resolution and phantom imaging using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing the two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with or without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra present similar peaks for the gamma energy of 99mTc at 140 keV. The FWHM was calculated to be 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, respectively, compared with the specified 9.9% for both systems. Sensitivities of the real and virtual gamma cameras were calculated to be 85.11 and 85.39 cps/MBq, respectively. The energy spectra of the simulated and real gamma cameras matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similar contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve image quality. PMID:20177569
Analytical band Monte Carlo analysis of electron transport in silicene
NASA Astrophysics Data System (ADS)
Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.
2016-06-01
An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1, beyond which it is less sensitive to changes in the substrate charge impurity density and surface optical phonons. We also found that the further reduction of mobility to ∼100 cm2 V‑1 s‑1 demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
2000-03-01
The dose rate conversion factors D(CF) (absorbed dose rate in air per unit activity per unit of soil mass, nGy h(-1) per Bq kg(-1)) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the D(CF) values calculated from the three codes are in very good agreement with one another. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. By contrast, the agreement is not as good (differences of 20-30%) for low energy photons. PMID:10688452
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine their feasibility on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA, with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique, a group of modern preconditioning strategies is researched. Compared to conventional Krylov methods, MCSA demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It
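The Monte Carlo process at the heart of MCSA estimates the solution of a fixed-point system x = Hx + b by sampling its Neumann series with random walks. The following is a minimal, unpreconditioned sketch on an assumed toy 2x2 system, not the SPN or Navier-Stokes problems of the thesis:

```python
import random

def mc_neumann_solve(H, b, n_walks=20000, wcut=1e-4, rng=None):
    # Monte Carlo estimate of x solving x = H x + b via the Neumann series
    # x = b + H b + H^2 b + ...  Each walk scores weight * b[state] at every
    # visited state; the next state j is chosen with probability
    # |H[state][j]| / row_norm, and the weight is corrected accordingly.
    rng = rng or random.Random(1)
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            acc += weight * b[state]
            while abs(weight) > wcut:      # truncate low-weight histories
                row = H[state]
                norm = sum(abs(v) for v in row)
                if norm == 0.0:
                    break
                r, j = rng.random() * norm, 0
                while j < n - 1 and r >= abs(row[j]):
                    r -= abs(row[j])
                    j += 1
                weight *= norm if row[j] >= 0 else -norm
                state = j
                acc += weight * b[state]
        x[i] = acc / n_walks
    return x

# Toy fixed-point system (assumed, spectral radius < 1); the exact solution
# of x = H x + b here is x = [2.25, 3.5].
H = [[0.4, 0.1], [0.2, 0.3]]
b = [1.0, 2.0]
x = mc_neumann_solve(H, b)
```

Synthetic acceleration then uses such estimates as corrections inside a fixed-point iteration; preconditioning keeps the spectral radius of H below one so the series converges.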
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. The validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark is also presented.
Frankl, Matthias; Macián-Juan, Rafael
2016-03-01
The development of intensity-modulated radiotherapy treatments delivering large amounts of monitor units (MUs) has recently raised concern about higher risks for secondary malignancies. In this study, optimised combinations of several variance reduction techniques (VRTs) have been implemented in order to achieve high precision in Monte Carlo (MC) radiation transport simulations and the calculation of in- and out-of-field photon and neutron dose-equivalent distributions in an anthropomorphic phantom using MCNPX, v.2.7. The computer model included a Varian Clinac 2100C treatment head and a high-resolution head phantom. By means of the applied VRTs, a relative uncertainty for the photon dose-equivalent distribution of <1 % in-field and 15 % on average over the rest of the phantom could be obtained. Neutron dose equivalent, caused by photonuclear reactions in the linear accelerator components at photon energies above approximately 8 MeV, has been calculated. The relative uncertainty, calculated for each voxel, could be kept below 5 % on average over all voxels of the phantom. Thus, a very detailed neutron dose distribution could be obtained. The achieved precision now allows a far better estimation of both photon and especially neutron doses out-of-field, where neutrons can become the predominant component of secondary radiation. PMID:26311702
Modeling photon transport in transabdominal fetal oximetry
NASA Astrophysics Data System (ADS)
Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton
2000-07-01
The possibility of optical oximetry of the blood in the fetal brain measured across the maternal abdomen just prior to birth is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diam fetal brain located at a constant depth of 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm2 fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 x 10^-6 cm^-2 if the fetal brain has the same optical properties as the mother, and Tf = 1.0 x 10^-6 cm^-2 for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water-equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis at the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
Wu, Y.; Modest, M.F.; Haworth, D.C. . E-mail: dch12@psu.edu
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
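The Random Sequential Addition step described in the abstract above can be sketched in a few lines. This version uses a brute-force overlap test where the paper substitutes a fast nearest-neighbor search, and the sphere count, radius and box size are illustrative only:

```python
import random

def rsa_pack(n_spheres, radius, box, max_tries=50000, rng=None):
    # Random Sequential Addition: propose uniformly random centers and
    # accept a sphere only if it overlaps no previously accepted sphere.
    # The overlap test here is brute force over all accepted centers; the
    # paper accelerates exactly this test with a nearest-neighbor search.
    rng = rng or random.Random(42)
    centers, tries = [], 0
    while len(centers) < n_spheres and tries < max_tries:
        tries += 1
        c = tuple(rng.uniform(radius, side - radius) for side in box)
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * radius) ** 2
               for p in centers):
            centers.append(c)
    return centers

# Illustrative, dilute mono-sized packing in a unit box (assumed sizes):
centers = rsa_pack(n_spheres=200, radius=0.04, box=(1.0, 1.0, 1.0))
```

For the dense TRISO-like packings of the paper, the brute-force test becomes the bottleneck, which is why a nearest-neighbor search over candidate spheres pays off.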
The macro response Monte Carlo method for electron transport
NASA Astrophysics Data System (ADS)
Svatos, Michelle Marie
1998-10-01
This thesis proves the feasibility of basing depth dose calculations for electron radiotherapy on first-principles single scatter physics, in an amount of time that is comparable to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or 'kugel'. A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV-8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry, which in this case is a CT (computed tomography) scan of a patient or phantom. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code against EGS4 and MCNP for depth dose in simple phantoms having density inhomogeneities. The energy deposition algorithms for spreading dose across 5-10 zones per kugel were tested. Most resulting depth dose calculations were within 2-3% of well-benchmarked codes, with one excursion to 4%. This thesis shows that the concept of using single scatter-based physics in clinical radiation
Yuan, Luqi; Xu, Shanshan; Fan, Shanhui
2015-11-15
We show that nonreciprocal unidirectional single-photon quantum transport can be achieved with the photonic Aharonov-Bohm effect. The system consists of a 1D waveguide coupling to two three-level atoms of the V-type. The two atoms, in addition, are each driven by an external coherent field. We show that the phase of the external coherent field provides a gauge potential for the photon states. With a proper choice of the phase difference between the two coherent fields, the transport of a single photon can exhibit unity contrast in its transmissions for the two propagation directions. PMID:26565819
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
Electron transport in magnetrons by a posteriori Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
Monte Carlo Particle Transport Capability for Inertial Confinement Fusion Applications
Brantley, P S; Stuart, L M
2006-11-06
A time-dependent massively-parallel Monte Carlo particle transport calculational module (ParticleMC) for inertial confinement fusion (ICF) applications is described. The ParticleMC package is designed with the long-term goal of transporting neutrons, charged particles, and gamma rays created during the simulation of ICF targets and surrounding materials, although currently the package treats neutrons and gamma rays. Neutrons created during thermonuclear burn provide a source of neutrons to the ParticleMC package. Other user-defined sources of particles are also available. The module is used within the context of a hydrodynamics client code, and the particle tracking is performed on the same computational mesh as used in the broader simulation. The module uses domain-decomposition and the MPI message passing interface to achieve parallel scaling for large numbers of computational cells. The Doppler effects of bulk hydrodynamic motion and the thermal effects due to the high temperatures encountered in ICF plasmas are directly included in the simulation. Numerical results for a three-dimensional benchmark test problem are presented in 3D XYZ geometry as a verification of the basic transport capability. In the full paper, additional numerical results including a prototype ICF simulation will be presented.
Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation
Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun
2015-01-01
The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695
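The interaction sampling at the core of such a photon Monte Carlo can be illustrated in stripped-down form. This sketch ignores geometry, layers and detection entirely and uses assumed absorption and scattering coefficients, so it is not the authors' peach model:

```python
import random

def mean_scatters_before_absorption(n_photons, mu_a, mu_s, rng=None):
    # At each interaction site a photon packet is absorbed with probability
    # mu_a / (mu_a + mu_s); otherwise it scatters and keeps propagating.
    # The expected number of scattering events per photon is mu_s / mu_a.
    rng = rng or random.Random(7)
    p_abs = mu_a / (mu_a + mu_s)
    total = 0
    for _ in range(n_photons):
        while rng.random() >= p_abs:   # photon survives this interaction
            total += 1                 # ... and scatters once more
    return total / n_photons

# Assumed optical coefficients (cm^-1), purely illustrative:
mean_events = mean_scatters_before_absorption(20000, mu_a=0.1, mu_s=1.0)
```

A full tissue MC would additionally sample free path lengths from an exponential distribution with mean 1/(mu_a + mu_s), track positions and directions, and switch coefficients at the skin/flesh/core boundaries, which is precisely the layered structure the paper argues must be modeled separately.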
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
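One of the techniques listed above, replacing a linear search with a binary version, typically applies where a discrete distribution is sampled from its cumulative table. A sketch with a hypothetical five-entry table (not ITS code):

```python
import bisect
import random

def sample_linear(cdf, u):
    # Original pattern: linear scan for the first CDF entry >= u, O(n).
    for i, c in enumerate(cdf):
        if u <= c:
            return i
    return len(cdf) - 1

def sample_binary(cdf, u):
    # Accelerated pattern: binary search via bisect, O(log n).
    return min(bisect.bisect_left(cdf, u), len(cdf) - 1)

# Hypothetical cumulative table for a discrete interaction distribution.
cdf = [0.1, 0.35, 0.6, 0.85, 1.0]
rng = random.Random(3)
for _ in range(1000):
    u = rng.random()
    assert sample_linear(cdf, u) == sample_binary(cdf, u)
```

The two samplers agree for every random variate, which is the "identical or statistically similar results" requirement the abstract imposes on each acceleration technique.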
Monte Carlo study of photon fields from a flattening filter-free clinical accelerator
Vassiliev, Oleg N.; Titt, Uwe; Kry, Stephen F.; Poenisch, Falk; Gillin, Michael T.; Mohan, Radhe
2006-04-15
In conventional clinical linear accelerators, the flattening filter scatters and absorbs a large fraction of primary photons. Increasing the beam-on time, which also increases the out-of-field exposure to patients, compensates for the reduction in photon fluence. In recent years, intensity modulated radiation therapy has been introduced, yielding better dose distributions than conventional three-dimensional conformal therapy. The drawback of this method is the further increase in beam-on time. An accelerator with the flattening filter removed, which would increase photon fluence greatly, could deliver considerably higher dose rates. The objective of the present study is to investigate the dosimetric properties of 6 and 18 MV photon beams from an accelerator without a flattening filter. The dosimetric data were generated using the Monte Carlo programs BEAMnrc and DOSXYZnrc. The accelerator model was based on the Varian Clinac 2100 design. We compared depth doses, dose rates, lateral profiles, doses outside collimation, and total and collimator scatter factors for an accelerator with and without a flattening filter. The study showed that removing the filter increased the dose rate on the central axis by a factor of 2.31 (6 MV) and 5.45 (18 MV) at a given target current. Because the flattening filter is a major source of head-scatter photons, its removal from the beam line could reduce the out-of-field dose.
A deterministic computational model for the two dimensional electron and photon transport
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Nealy, John E.
2014-12-01
A deterministic (non-statistical) two dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code that extends a previously developed one dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics-based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general, requiring only the pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ where -π/2<θ<π/2, and corresponding symmetry is assumed for the range of azimuth angles (0<φ<2π). In the transport formalism, a combined mean-free-path and average-trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow higher particle energies to be studied with more accurate physical models, and improve statistics, since more particle tracks can be simulated in less time.
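The parallelization strategy described above, splitting the particle budget across workers that each draw from an independent random-number stream, can be sketched in miniature. The toy below is an illustration only, not the MC4/SPRNG/DCMT code: Python's multiprocessing stands in for MPI, per-worker seeding stands in for a parallel RNG library, and the "physics" is just an exponential free path through a unit slab. All function names here are our own.

```python
import math
import multiprocessing as mp
import random

def simulate_chunk(args):
    """One worker: transport n toy particles using its own RNG stream."""
    n, seed = args
    rng = random.Random(seed)              # independent stream per worker
    absorbed = 0
    for _ in range(n):
        path = -math.log(rng.random())     # exponential free path (unit mfp)
        if path < 1.0:                     # interaction inside a unit slab
            absorbed += 1
    return absorbed

def parallel_estimate(n_total=200_000, n_workers=4, base_seed=1234):
    """Split the particle budget across workers with decorrelated seeds
    (a stand-in for SPRNG/DCMT parallel streams) and combine the tallies."""
    chunks = [(n_total // n_workers, base_seed + i) for i in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        absorbed = sum(pool.map(simulate_chunk, chunks))
    return absorbed / (n_total // n_workers * n_workers)
```

The analytic interaction probability in a unit slab is 1 - e⁻¹ ≈ 0.632, which the combined tally should reproduce to within statistical noise; the correctness requirement the abstract alludes to is exactly this, that the parallel answer matches the serial one when the streams are truly independent.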
Photon transport through a nanohole by a moving atom
NASA Astrophysics Data System (ADS)
Afanasiev, A. E.; Melentiev, P. N.; Kuzin, A. A.; Kalatskiy, A. Yu; Balykin, V. I.
2016-05-01
We have proposed and investigated for the first time an efficient way of transporting photons through a subwavelength hole by a moving atom. The transfer mechanism is based on the reduction of the wave packet of a single photon due to its absorption by an atom and, correspondingly, its localization in a volume smaller than both the radiation wavelength and the nanohole size. The scheme realizes the transformation of a single-photon, single-mode wave packet of the laser light into a single-photon multimode wave packet in free space.
Dissipationless electron transport in photon-dressed nanostructures.
Kibis, O V
2011-09-01
It is shown that the electron coupling to photons in field-dressed nanostructures can result in a ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without Joule heating of the nanostructure and is nondissipative. Such dissipationless electron transport can be realized in strongly coupled electron-photon systems with broken time-reversal symmetry, particularly in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519
Few-photon transport in low-dimensional systems
Longo, Paolo; Schmitteckert, Peter; Busch, Kurt
2011-06-15
We analyze the role of quantum interference effects induced by an embedded two-level system on the photon transport properties in waveguiding structures that exhibit cutoffs (band edges) in their dispersion relation. In particular, we demonstrate that these systems invariably exhibit single-particle photon-atom bound states and strong effective nonlinear responses on the few-photon level. Based on this, we find that the properties of these photon-atom bound states may be tuned via the underlying dispersion relation and that their occupation can be controlled via multiparticle scattering processes. This opens an interesting route for controlling photon transport properties in a number of solid-state-based quantum optical systems and the realization of corresponding functional elements and devices.
Electron transport through a quantum dot assisted by cavity photons
NASA Astrophysics Data System (ADS)
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2013-11-01
We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry.
A Fano cavity test for Monte Carlo proton transport algorithms
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-15
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross sections are uniform. Such tests have not yet been performed for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E₀ and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE₀/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy
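The expectation underlying the test, that the cavity dose per unit incident fluence equals ΣE₀/ρ, can be checked in a toy one-dimensional model. The sketch below is not Geant4 or PENH: "virtual" particles deposit E₀ locally at each collision site and continue unchanged, mimicking the restored-after-each-interaction device of the abstract, so the tally should reproduce ΣE₀/ρ for any density. The function name and geometry are our own illustrative choices.

```python
import math
import random

def fano_toy_dose(n_particles, sigma, rho, e0, cavity=(5.0, 5.2),
                  slab_len=10.0, seed=1):
    """Toy 1D analogue of the Fano cavity test: a virtual particle
    deposits e0 locally at every collision and is restored unchanged,
    so the cavity dose per unit incident fluence should equal
    sigma * e0 / rho, independent of the density rho."""
    rng = random.Random(seed)
    z1, z2 = cavity
    edep = 0.0
    for _ in range(n_particles):
        z = 0.0
        while True:
            z += -math.log(rng.random()) / sigma   # exponential free path
            if z > slab_len:
                break
            if z1 <= z <= z2:
                edep += e0                          # local energy deposition
    cavity_mass = rho * (z2 - z1)                   # per unit cross-sectional area
    return (edep / cavity_mass) / n_particles       # dose per unit incident fluence
```

With sigma = 0.5 cm⁻¹, e0 = 2 and rho = 4, the tally converges to ΣE₀/ρ = 0.25 to within statistical noise; a transport algorithm that fails such a self-consistency check (as the larger-step-size runs above do at the 0.6-0.7% level) has a step-size or boundary-crossing artifact.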
Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling
Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.
2008-01-14
Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1
Berg, Eric; Roncali, Emilie; Cherry, Simon R.
2015-01-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection by up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated the utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
Controlling single-photon transport with three-level quantum dots in photonic crystals
NASA Astrophysics Data System (ADS)
Yan, Cong-Hua; Jia, Wen-Zhi; Wei, Lian-Fu
2014-03-01
We investigate how to control single-photon transport along the photonic crystal waveguide with the recent experimentally demonstrated artificial atoms [i.e., Λ-type quantum dots (QDs)] [S. G. Carter et al., Nat. Photon. 7, 329 (2013), 10.1038/nphoton.2013.41] in an all-optical way. Adopting full quantum theory in real space, we analytically calculate the transport coefficients of single photons scattered by a Λ-type QD embedded in single- and two-mode photonic crystal cavities (PCCs), respectively. Our numerical results clearly show that the photonic transmission properties can be exactly manipulated by adjusting the coupling strengths of waveguide-cavity and QD-cavity interactions. Specifically, for the PCC with two degenerate orthogonal polarization modes coupled to a Λ-type QD with two degenerate ground states, we find that the photonic transmission spectra show three Rabi-splitting dips and the present system could serve as single-photon polarization beam splitters. The feasibility of our proposal with the current photonic crystal technique is also discussed.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space. PMID:20935713
Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.
Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats
2015-09-01
For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes coupled ionizing particle and optical photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519
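The role of a parameter like σ(α) can be illustrated with a toy "specular lobe" reflection: the microfacet normal is tilted away from the average surface normal by a Gaussian-distributed angle, and the photon then reflects specularly about that tilted normal. This is a simplified stand-in for the idea behind GEANT4's UNIFIED surface model, not the actual GEANT4/GATE code; the function names are ours.

```python
import math
import random

def _tangents(n):
    """Two unit vectors orthogonal to the unit vector n."""
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = (n[1]*a[2] - n[2]*a[1], n[2]*a[0] - n[0]*a[2], n[0]*a[1] - n[1]*a[0])
    norm = math.sqrt(sum(c * c for c in t1))
    t1 = (t1[0]/norm, t1[1]/norm, t1[2]/norm)
    t2 = (n[1]*t1[2] - n[2]*t1[1], n[2]*t1[0] - n[0]*t1[2], n[0]*t1[1] - n[1]*t1[0])
    return t1, t2

def rough_reflect(d, n, sigma_alpha_deg, rng):
    """Specular reflection about a microfacet normal tilted from the
    average surface normal n by a Gaussian angle (the role of sigma(alpha))."""
    alpha = abs(rng.gauss(0.0, math.radians(sigma_alpha_deg)))
    phi = 2.0 * math.pi * rng.random()
    t1, t2 = _tangents(n)
    m = tuple(math.cos(alpha) * n[i]
              + math.sin(alpha) * (math.cos(phi) * t1[i] + math.sin(phi) * t2[i])
              for i in range(3))
    k = 2.0 * sum(d[i] * m[i] for i in range(3))    # mirror about facet normal m
    return tuple(d[i] - k * m[i] for i in range(3))
```

With σ(α) = 0 this reduces to a perfect mirror (a polished finish); larger values broaden the specular lobe, qualitatively like the ground finishes discussed above.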
Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
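The γ-index analysis used above (e.g. 2%/2 mm, 3%/3 mm) combines a dose tolerance and a distance-to-agreement tolerance into a single pass/fail metric per point. A minimal 1D global-gamma sketch follows; the function and parameter names are our own, and real treatment-planning comparisons add interpolation of the evaluated distribution and low-dose thresholds.

```python
import numpy as np

def gamma_index_1d(ref_dose, ref_x, eval_dose, eval_x,
                   dose_tol=0.03, dist_tol=3.0):
    """Minimal 1D global gamma index (e.g. 3%/3 mm): for each reference
    point, the minimum over evaluated points of the combined dose and
    distance disagreement. Gamma <= 1 means the point passes."""
    d_norm = dose_tol * ref_dose.max()          # global dose normalization
    gammas = np.empty(len(ref_dose))
    for i, (x, d) in enumerate(zip(ref_x, ref_dose)):
        dd = (eval_dose - d) / d_norm           # dose difference term
        dx = (eval_x - x) / dist_tol            # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas
```

A statement such as "doses are within 3% or 3 mm" corresponds to the fraction of points with γ ≤ 1 being (near) 100%; an identical distribution gives γ = 0 everywhere, and a 1 mm rigid shift with a 3 mm tolerance still passes.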
Neutron contamination of Varian Clinac iX 10 MV photon beam using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Yani, S.; Tursinah, R.; Rhani, M. F.; Soh, R. C. X.; Haryanto, F.; Arif, I.
2016-03-01
High energy medical accelerators are commonly used in radiotherapy to increase the effectiveness of treatments. Neutrons can be emitted from a medical accelerator when X-rays strike any of its materials, an issue that has drawn the attention of many researchers. Neutron contamination causes problems such as degraded image resolution and radiation protection concerns for patients and radiation oncologists. This study concerns the simulation of the neutron contamination emitted from a Varian Clinac iX 10 MV using a Monte Carlo code system. As the neutron production process is very complex, Monte Carlo simulation with the MCNPX code system was carried out to study this contamination. The medical accelerator was modelled based on the actual materials and geometry. The maximum energies of the photons and neutrons in the scoring plane were 10.5 and 2.239 MeV, respectively. The number and energy of the particles produced depend on the depth and the distance from the beam axis. These results indicate that the neutron contamination produced by the linac's 10 MV photon beam in a typical treatment is not negligible.
Monte Carlo impurity transport modeling in the DIII-D transport
Evans, T.E.; Finkenthal, D.F.
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvements in the physics models and atomic data are discussed.
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by the ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for the computation of dose conversion coefficients (DCCs), expressed in absorbed dose per air kerma free-in-air, for the colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of the colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP Publication 110. The present work thus validates the use of the FLUKA code in the computation of organ DCCs for photons using the ICRP adult voxel phantoms. Further, the DCCs for the gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for the breast, testes and thyroid above 1 MeV, and for most of the organs at energies below 60 keV, in comparison with the results published in ICRP 74. The DCCs of the female voxel phantom were found to be higher than those of the male phantom for almost all organs in both geometries. PMID:21147784
Parallel processing implementation for the coupled transport of photons and electrons using OpenMP
NASA Astrophysics Data System (ADS)
Doerner, Edgardo
2016-05-01
In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
NASA Astrophysics Data System (ADS)
Liu, C.; Shu, D.; Kuzay, T. M.; Kersevan, R.
1996-09-01
Monte Carlo computer simulations have been successfully applied in the design of vacuum systems. These simulations allow the user to check the vacuum performance without the need to build a prototype of the vacuum system. In this paper we demonstrate the effectiveness and suitability of these simulations in the design of differential pumps for synchrotron radiation beamlines. Eventually a good number of the beamline front ends at the Advanced Photon Source (APS) will use differential pumps to protect the synchrotron storage ring vacuum. A Monte Carlo computer program is used to calculate the molecular flow transmission and pressure distribution across the differential pump. A differential pump system, which consists of two 170 l/s ion pumps with three conductance-limiting apertures, was previously tested on an APS insertion-device beamline front end. Pressure distribution measurements using controlled leaks demonstrated a pressure difference of over two decades across the differential pump. A new differential pump utilizes a fixed mask between two 170 l/s ion pumps. The fixed mask, which has a conical channel with a small cross section of 4.5×4.5 mm² at the far end, is used in the beamline to confine the photon beam. Monte Carlo simulations indicate that this configuration with the fixed mask significantly improves the pressure reduction capability of the differential pump, to ~3×10⁻⁵, within the operational range from ~10⁻⁴ to 10⁻¹⁰ Torr. The lower end of the pressure range is limited by outgassing from front-end components and the higher end by the pumping ability of the ion pumps.
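Molecular-flow transmission probabilities of the kind computed for the pump apertures above can be estimated with a test-particle Monte Carlo: molecules enter with a cosine-law angular distribution, fly in straight lines, and are re-emitted diffusely at every wall collision. The sketch below is our own toy, not the program used at the APS; it estimates the transmission probability (Clausing factor) of a plain cylindrical tube.

```python
import math
import random

def _tangents(n):
    """Two unit vectors orthogonal to the unit vector n."""
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t1 = (n[1]*a[2] - n[2]*a[1], n[2]*a[0] - n[0]*a[2], n[0]*a[1] - n[1]*a[0])
    norm = math.sqrt(sum(c * c for c in t1))
    t1 = (t1[0]/norm, t1[1]/norm, t1[2]/norm)
    t2 = (n[1]*t1[2] - n[2]*t1[1], n[2]*t1[0] - n[0]*t1[2], n[0]*t1[1] - n[1]*t1[0])
    return t1, t2

def _cosine_dir(rng, n):
    """Sample a direction from the cosine-law (diffuse) distribution about n."""
    ct = math.sqrt(rng.random())
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * rng.random()
    t1, t2 = _tangents(n)
    return tuple(ct * n[i] + st * (math.cos(phi) * t1[i] + math.sin(phi) * t2[i])
                 for i in range(3))

def clausing_mc(L_over_R, n=30000, seed=7):
    """Test-particle MC estimate of the molecular-flow transmission
    probability of a cylindrical tube with diffusely reflecting walls."""
    rng = random.Random(seed)
    R, L = 1.0, L_over_R
    transmitted = 0
    for _ in range(n):
        r = R * math.sqrt(rng.random())          # uniform over entrance disk
        phi = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(phi), r * math.sin(phi), 0.0
        ux, uy, uz = _cosine_dir(rng, (0.0, 0.0, 1.0))
        while True:
            a = ux * ux + uy * uy
            if a < 1e-12:                        # purely axial flight
                transmitted += uz > 0.0
                break
            b = x * ux + y * uy                  # ray-cylinder intersection
            c = x * x + y * y - R * R
            t = (-b + math.sqrt(max(b * b - a * c, 0.0))) / a
            z_hit = z + t * uz
            if z_hit >= L:
                transmitted += 1                 # out the far end
                break
            if z_hit <= 0.0:
                break                            # back out through the entrance
            x, y, z = x + t * ux, y + t * uy, z_hit
            s = R / math.hypot(x, y)             # re-pin the point to the wall
            x, y = x * s, y * s
            ux, uy, uz = _cosine_dir(rng, (-x / R, -y / R, 0.0))
    return transmitted / n
```

For L/R = 1 the tabulated Clausing factor is about 0.672, and the estimate falls monotonically as the tube lengthens, which is the behaviour the conductance-limiting apertures of the differential pump exploit.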
Photonic quantum transport in a nonlinear optical fiber
NASA Astrophysics Data System (ADS)
Hafezi, M.; Chang, D. E.; Gritsev, V.; Demler, E. A.; Lukin, M. D.
2011-06-01
We theoretically study the transmission of few-photon quantum fields through a strongly nonlinear optical medium. We develop a general approach to investigate nonequilibrium quantum transport of bosonic fields through a finite-size nonlinear medium and apply it to a recently demonstrated experimental system where cold atoms are loaded in a hollow-core optical fiber. We show that when the interaction between photons is effectively repulsive, the system acts as a single-photon switch. In the case of attractive interaction, the system can exhibit either antibunching or bunching, associated with the resonant excitation of bound states of photons by the input field. These effects can be observed by probing statistics of photons transmitted through the nonlinear fiber.
Robust light transport in non-Hermitian photonic lattices.
Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank
2011-03-15
Purpose: The goal of this work is to compare Dm,m (radiation transported in medium; dose scored in medium) and Dw,m (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: 125I, 103Pd, and 131Cs seeds, as well as an EBS operating at 50 kV. Ratios of Dw,m over Dm,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using 103Pd) and prostate (using 125I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for Dw,m and Dm,m. Results: (1) Differences (Dw,m/Dm,m-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, Dw,m/Dm,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors by up to 38%. (2) The ratio of D90(w
Monte Carlo-based revised values of dose rate constants at discrete photon energies
Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system uses the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in the photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30–50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20–50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of the dose rate constants by up to 3%. PMID:24600166
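The dose rate constant referenced here is, by definition, the absorbed dose rate to water at the reference distance divided by the air-kerma strength Sk, so any shift in Sk propagates directly into the constant. A toy sketch with placeholder numbers (not values from the paper):

```python
def dose_rate_constant(dose_rate_ref, air_kerma_strength):
    """Dose rate constant = D_dot(r0) / S_k, in cGy/h per U (1 U = 1 uGy m^2/h)."""
    return dose_rate_ref / air_kerma_strength

lam = dose_rate_constant(0.98, 1.000)          # placeholder dose rate and S_k
lam_shifted = dose_rate_constant(0.98, 1.025)  # S_k off by 2.5%, as in the low-energy region
rel_change = lam / lam_shifted - 1.0           # the constant shifts by the same 2.5%
```

This makes the abstract's point explicit: a 2.5% deviation in Sk shows up one-for-one in the derived dose rate constant.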
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon “packets” as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct “humped” morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with “virtual-electrode” regions of strongly de-/hyper-polarized tissue surrounding the cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach, used together with highly anatomically-detailed models, for fully investigating electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
NASA Astrophysics Data System (ADS)
Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.
2008-09-01
The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions of as much as 111% and 77%, respectively, with most results within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and the low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower-energy electron population.
A Monte Carlo simulation for predicting photon return from sodium laser guide star
NASA Astrophysics Data System (ADS)
Feng, Lu; Kibblewhite, Edward; Jin, Kai; Xue, Suijian; Shen, Zhixia; Bo, Yong; Zuo, Junwei; Wei, Kai
2015-10-01
Sodium laser guide stars are an ideal source for astronomical adaptive optics systems that correct wave-front aberration caused by atmospheric turbulence. However, the cost and difficulty of manufacturing a compact, high-quality sodium laser with power higher than 20 W do not guarantee that the laser will provide a sufficiently bright guide star, owing to the physics of the sodium atom in the atmosphere. It would be helpful if a prediction tool could estimate the photon-generating performance of arbitrary laser output formats before an actual laser is designed. Based on rate equations, we developed Monte Carlo simulation software that can be used to predict sodium laser guide star generation performance for arbitrary laser formats. In this paper, we describe the model of our simulation and its implementation, and present comparison results with field test data.
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in the recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency. PMID:20445737
MULTIDIMENSIONAL COUPLED PHOTON-ELECTRON TRANSPORT SIMULATIONS USING NEUTRAL PARTICLE SN CODES
Ilas, Dan; Williams, Mark L; Peplow, Douglas E.; Kirk, Bernadette Lugue
2008-01-01
During the past two years a study was underway at ORNL to assess the suitability of the popular SN neutral particle codes ANISN, DORT and TORT for coupled photon-electron calculations specific to external beam therapy in medical physics applications. The CEPXS-BFP code was used to generate the cross sections. The computational tests were performed on phantoms typical of those used in medical physics for external beam therapy, with materials simulated by water at different densities, and the comparisons were made against Monte Carlo simulations that served as benchmarks. Although the results for one-dimensional calculations were encouraging, the higher dimensional transport codes had fundamental difficulties in handling the electron transport. The results of two-dimensional simulations using the code DORT with an S16 fully symmetric quadrature set agree fairly well with the reference Monte Carlo results, but not well enough for clinical applications. While the photon fluxes are in better agreement (generally within less than 5% of the reference), the discrepancy increases, sometimes very significantly, for the electron fluxes. The paper, however, focuses on the results obtained with the three-dimensional code TORT, which had convergence difficulties for the electron groups. Numerical instabilities occurred in these groups and became more pronounced as the degree of anisotropy of the problem increased.
Seif, F.; Bayatiani, M. R.
2015-01-01
Background: Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination, which can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Methods: The 6 MV photon beam of a Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's specifications. The validation was done by comparing the calculated depth dose and profiles of the simulation with dosimetry measurements in a water phantom (error less than 2%). The percentage depth doses (PDDs), profiles and contamination electron energy spectrum were calculated for different therapeutic field sizes (5×5 to 40×40 cm2). Results: The dose from electron contamination was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for 5×5 cm2 to 27% for 40×40 cm2. Conclusion: Based on the results, the effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in treatment planning systems. PMID:25973409
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Transport properties of pseudospin-1 photons (Presentation Recording)
NASA Astrophysics Data System (ADS)
Chan, Che Ting; Fang, Anan; Zhang, Zhao-Qing; Louie, Steven G.
2015-09-01
Pseudospin is of central importance in governing many unusual transport properties of graphene and other artificial systems which have pseudospins of 1/2. These unconventional transport properties are manifested in phenomena such as Klein tunneling, and collimation of electron beams in one-dimensional external potentials. Here we show that in certain photonic crystals (PCs) exhibiting conical dispersions at the center of the Brillouin zone, the eigenstates near the "Dirac-like point" can be described by an effective spin-orbit Hamiltonian with a pseudospin of 1. This effective Hamiltonian describes within a unified framework the wave propagation in both positive and negative refractive index media, which correspond to the upper and lower conical bands respectively. Different from the Berry phase of π for the Dirac cone of pseudospin-1/2 systems, the Berry phase for the Dirac-like cone turns out to be zero from this pseudospin-1 Hamiltonian. In addition, we find that a change of length scale of the PC can shift the Dirac-like cone rigidly up or down in frequency with its group velocity unchanged, hence mimicking a gate voltage in graphene and allowing for a simple mechanism to control the flow of pseudospin-1 photons. As a photonic analogue of the electron potential, the length-scale-induced Dirac-like point shift is effectively a photonic potential within the effective pseudospin-1 Hamiltonian description. At the interface of two different potentials, the 3-component spinor gives rise to distinct boundary conditions which do not require each component of the wave function to be continuous, leading to new wave transport behaviors as shown in Klein tunneling and supercollimation. For example, the Klein tunneling of pseudospin-1 photons is much less anisotropic with respect to the incident angle than that of pseudospin-1/2 electrons, and collimation can be more robust with pseudospin-1 than pseudospin-1/2. The special wave transport properties of pseudospin-1 photons
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries
Hoffman, T.J.; Tang, J.S.; Parks, C.V.
1982-01-01
Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.
Hayakawa, Carole K; Spanier, Jerome; Venugopalan, Vasan
2014-02-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
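The two estimators contrasted above can be reproduced in a deliberately simple two-stream (1-D) slab model: discrete absorption weighting samples collisions with the total coefficient and multiplies the weight by the scattering albedo at each collision, while continuous weighting samples collisions with the scattering coefficient alone and attenuates the weight along every path segment. This is an illustrative sketch, not the authors' code, and all optical parameters are arbitrary:

```python
import math
import random

def slab_transmission(n_photons, mu_a, mu_s, thickness, discrete, rng):
    """Two-stream (1-D) slab: estimate transmission with either discrete
    absorption weighting (collisions sampled with mu_t, weight *= mu_s/mu_t)
    or continuous absorption weighting (collisions sampled with mu_s,
    weight *= exp(-mu_a * path))."""
    mu_t = mu_a + mu_s
    scored = 0.0
    for _ in range(n_photons):
        x, direction, w = 0.0, 1, 1.0
        while True:
            rate = mu_t if discrete else mu_s
            step = -math.log(rng.random()) / rate        # free-flight length
            to_edge = (thickness - x) if direction > 0 else x
            if step >= to_edge:                          # photon escapes the slab
                if not discrete:
                    w *= math.exp(-mu_a * to_edge)       # attenuate over the exit path
                if direction > 0:
                    scored += w                          # transmitted: score the weight
                break                                    # reflected photons are not scored
            x += direction * step
            if discrete:
                w *= mu_s / mu_t                         # survival fraction at the collision
            else:
                w *= math.exp(-mu_a * step)              # continuous attenuation en route
            direction = 1 if rng.random() < 0.5 else -1  # isotropic two-stream scatter
            if w < 1e-6:                                 # crude cutoff in place of roulette
                break
    return scored / n_photons

rng = random.Random(42)
t_daw = slab_transmission(50_000, 0.5, 2.0, 1.0, True, rng)   # discrete weighting
t_caw = slab_transmission(50_000, 0.5, 2.0, 1.0, False, rng)  # continuous weighting
```

Both estimators target the same transmission, so the two estimates agree to within statistical noise, echoing the unbiasedness result; their variances, not their means, are what differ in the regimes the abstract describes.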
Czarnecki, D; Voigts-Rhetz, P von; Shishechian, D Uchimura; Zink, K
2015-06-15
Purpose: To develop a fast and accurate calculation model for reconstructing the applied photon fluence of an external photon radiation therapy treatment from an image recorded by an electronic portal imaging device (EPID). Methods: To reconstruct the initial photon fluence, the 2D EPID image was corrected for scatter from the patient/phantom and the EPID to generate the transmitted primary photon fluence. This was done by an iterative deconvolution using precalculated point spread functions (PSFs). The transmitted primary photon fluence was then backprojected through the patient/phantom geometry, considering linear attenuation, to recover the initial photon fluence applied for the treatment. The calculation model was verified using Monte Carlo simulations performed with the EGSnrc code system. EPID images were produced by calculating the dose deposition in the EPID from a 6 MV photon beam irradiating a water phantom with air and bone inhomogeneities and the ICRP anthropomorphic voxel phantom. Results: The initial photon fluence was reconstructed using both a single PSF and position-dependent PSFs which depend on the radiological thickness of the irradiated object. Applying position-dependent point spread functions, the mean uncertainty of the reconstructed initial photon fluence was reduced from 1.13% to 0.13%. Conclusion: This study presents a calculation model for fluence reconstruction from EPID images. The results show a clear advantage when position-dependent PSFs are used for the iterative reconstruction. The basic framework of a reconstruction method was established; further evaluation must be made in an experimental study.
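The scatter-stripping step can be sketched as a Van Cittert-style fixed-point iteration on a 1-D toy profile: re-estimate the scatter from the current primary estimate, subtract it from the measurement, and repeat. The exponential-tail kernel and the field shape below are invented placeholders, not the precalculated kernels of the study:

```python
import numpy as np

def strip_scatter(measured, kernel, n_iter=20):
    """Fixed-point iteration for the model
    measured = primary + convolve(primary, kernel):
    repeatedly re-estimate the scatter from the current primary
    and subtract it from the measurement."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")
        primary = measured - scatter
    return primary

# Invented 1-D "EPID profile": a 40-pixel-wide field on a low background,
# plus synthetic scatter built with a hypothetical exponential-tail kernel.
true_primary = np.where(np.abs(np.arange(101) - 50) < 20, 1.0, 0.05)
kernel = 0.02 * np.exp(-0.1 * np.abs(np.arange(-30, 31)))
measured = true_primary + np.convolve(true_primary, kernel, mode="same")
recovered = strip_scatter(measured, kernel)
```

Convergence requires the kernel's total weight to be below 1 (here about 0.38), so each pass shrinks the residual geometrically and the primary fluence is recovered to numerical precision.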
Photon-Inhibited Topological Transport in Quantum Well Heterostructures
NASA Astrophysics Data System (ADS)
Farrell, Aaron; Pereg-Barnea, T.
2015-09-01
Here we provide a picture of transport in quantum well heterostructures with a periodic driving field in terms of a probabilistic occupation of the topologically protected edge states in the system. This is done by generalizing methods from the field of photon-assisted tunneling. We show that the time-dependent field dresses the underlying Hamiltonian of the heterostructure and splits the system into sidebands. Each of these sidebands is occupied with a certain probability which depends on the drive frequency and strength. This leads to a reduction in the topological transport signatures of the system because of the probability of absorbing or emitting a photon. Therefore, when the voltage is tuned to the bulk gap the conductance is smaller than the expected 2e2/h. We refer to this as photon-inhibited topological transport. Nevertheless, the edge modes reveal their topological origin in the robustness of the edge conductance to disorder and changes in model parameters. In this work the analogy with photon-assisted tunneling allows us to interpret the calculated conductivity and explain the sum rule observed by Kundu and Seradjeh.
Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi
2009-07-01
The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k(pl) for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c(pl) and a chamber-dependent fluence correction factor h(pl) for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h(pl) values for the plane-parallel chambers were evaluated from the electron fluence correction factor phi(pl)w and wall correction factors P(wall,w) and P(wall,pl) for a combination of water or plastic materials. The calculated k(pl) and h(pl) values were verified by comparison with the measured values. A set of k(pl) values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k(pl) values also agreed within their combined uncertainty with the measured data. For electron beams, the c(pl) values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The phi(pl)w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P(wall) between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h(pl) values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h(pl) values also agreed within their combined uncertainty with the measured data. The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
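The CADIS prescription behind these variance reduction parameters can be stated compactly: with adjoint (importance) flux φ† and source q, the estimated detector response is R = Σ_i q_i φ†_i, the weight-window centers are w_i = R/φ†_i, and the biased source is q̂_i = q_i φ†_i / R, so source particles are born inside their windows. A schematic sketch over a toy 3-cell problem (all numbers invented, not from the benchmark):

```python
def cadis_parameters(source, adjoint_flux):
    """CADIS: R = sum_i q_i * adjoint_i estimates the detector response;
    weight-window centers are w_i = R / adjoint_i, and the biased source
    is q_i * adjoint_i / R (source particles start within their window)."""
    R = sum(q * phi for q, phi in zip(source, adjoint_flux))
    windows = [R / phi for phi in adjoint_flux]
    biased_source = [q * phi / R for q, phi in zip(source, adjoint_flux)]
    return R, windows, biased_source

# Toy 3-cell problem: all source in cell 0, adjoint importance rising
# toward the detector in cell 2.
R, windows, q_hat = cadis_parameters([1.0, 0.0, 0.0], [0.5, 1.0, 2.0])
```

Lower window centers where importance is high cause splitting as particles approach the detector, which is the mechanism behind the reported speedups.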
Monte Carlo simulation of small electron fields collimated by the integrated photon MLC
NASA Astrophysics Data System (ADS)
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used; no additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of the initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated both the overly broad dose profiles and the increased dose along the central axis. The beam model was validated with measurements, whereby agreement mostly within 3%/3 mm was found.
Monte Carlo Simulation of H⁻ Ion Transport
Diomede, P.; Longo, S.; Capitelli, M.
2009-03-12
In this work we study in detail the kinetics of H⁻ ion swarms in velocity space: this provides a useful contrast to the usual literature in the field, where device features in configuration space are often included in detail but kinetic distributions are only marginally considered. To this end a Monte Carlo model is applied, which includes several collision processes of H⁻ ions with neutral particles as well as Coulomb collisions with positive ions. We characterize the full velocity distribution, i.e. including its anisotropy, for different values of E/N, the atomic fraction and the H⁺ mole fraction, which makes our results of interest for both source modeling and beam formation. A simple analytical theory for highly dissociated hydrogen is formulated and checked by Monte Carlo calculations.
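As a much-reduced illustration of the free-flight swarm Monte Carlo idea, the toy model below tracks a single ion in a constant field with a fixed collision frequency and total momentum loss at each collision; the real model's cross sections, Coulomb collisions and anisotropy are all omitted. For this caricature the drift velocity is analytically accel/coll_freq, which the simulation reproduces:

```python
import math
import random

def drift_velocity(n_flights, accel, coll_freq, rng):
    """Free-flight Monte Carlo caricature: constant acceleration between
    collisions, exponentially distributed flight times, and total momentum
    loss at each collision.  The drift velocity is total displacement over
    total time; analytically it equals accel / coll_freq."""
    x_total, t_total = 0.0, 0.0
    for _ in range(n_flights):
        tau = -math.log(rng.random()) / coll_freq  # time to next collision
        x_total += 0.5 * accel * tau * tau         # each flight starts from rest
        t_total += tau
    return x_total / t_total

rng = random.Random(1)
vd = drift_velocity(200_000, accel=1.0, coll_freq=1.0, rng=rng)
```

Replacing the reset-to-zero collision with sampled post-collision velocities and real cross sections is what turns this caricature into a swarm model of the kind the abstract describes.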
Lee, Y K
2005-01-01
TRIPOLI-4.3 Monte Carlo transport code has been used to evaluate the QUADOS (Quality Assurance of Computational Tools for Dosimetry) problem P4, neutron and photon response of an albedo-type thermoluminescence personal dosemeter (TLD) located on an ISO slab phantom. Two enriched 6LiF and two 7LiF TLD chips were used and they were protected, in front or behind, with a boron-loaded dosemeter-holder. Neutron response of the four chips was determined by counting 6Li(n,t)4He events using ENDF/B-VI.4 library and photon response by estimating absorbed dose (MeV g(-1)). Ten neutron energies from thermal to 20 MeV and six photon energies from 33 keV to 1.25 MeV were used to study the energy dependence. The fraction of the neutron and photon response owing to phantom backscatter has also been investigated. Detailed TRIPOLI-4.3 solutions are presented and compared with MCNP-4C calculations. PMID:16381740
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Multidimensional electron-photon transport with standard discrete ordinates codes
Drumm, C.R.
1995-12-31
A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages to using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems.
LDRD project 151362 : low energy electron-photon transport.
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular “independent-atom approximation”. We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Iwano, Takayuki; Umeyama, Shinji
2015-12-01
fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS has recently become common. Using LEDs, fNIRS equipment can be made more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues depending on the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
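The crosstalk mechanism described above can be sketched numerically: if the reconstruction assumes wavelength-independent path lengths while the true partial path lengths differ by wavelength, a pure deoxy-hemoglobin change leaks into the oxy-hemoglobin estimate. All numbers below (extinction coefficients, path lengths) are illustrative stand-ins, not values from the study.

```python
import numpy as np

# Hypothetical extinction coefficients [1/(mM*cm)] at two wavelengths
# (illustrative values only, not taken from the paper).
E = np.array([[0.6, 1.3],   # wavelength 1: [eps_HbO, eps_HbR]
              [1.0, 0.8]])  # wavelength 2

# "True" wavelength-dependent partial path lengths in gray matter [cm]
L_true = np.array([5.0, 4.4])
# Path lengths assumed in the reconstruction (taken equal -> model error)
L_assumed = np.array([4.7, 4.7])

# True concentration change: deoxy-hemoglobin only
dC_true = np.array([0.0, 1.0])  # [dHbO, dHbR] in mM

# Forward model (modified Beer-Lambert): dOD_i = L_true_i * (E @ dC)_i
dOD = L_true * (E @ dC_true)

# Inversion with the assumed (wrong) path lengths
dC_est = np.linalg.solve(E * L_assumed[:, None], dOD)

# Crosstalk: spurious HbO change per unit recovered HbR change
crosstalk = dC_est[0] / dC_est[1]
print(f"estimated [dHbO, dHbR] = {dC_est}, crosstalk = {crosstalk:.3f}")
```

With matched path lengths the crosstalk vanishes; the mismatch alone produces the spurious oxy-hemoglobin signal.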
Source of statistical noises in the Monte Carlo sampling techniques for coherently scattered photons
Muhammad, Wazir; Lee, Sang Hoon
2013-01-01
Detailed comparisons of the predictions of Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs), and their advantages and shortcomings in calculating elastic scattering cross sections, can be found in the literature. However, the issues related to their implementation in Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. In addition, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs over squared momentum transfer. In the current study, the roles and issues of RFFs/MFFs and LIT in MC sampling for coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e. statistical noise) in the PDFs and either RFFs or MFFs. In fact, the noise in the PDFs appeared due to the use of LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table and has no scientific background. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, these statistical noises can be efficiently removed by introducing more data points in the data tables. PMID:22984278
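The role of the linear interpolation technique can be illustrated with a toy inverse-CDF sampler: a coarsely tabulated integral yields a piecewise-constant sampled density (the step-shaped "noise"), which smooths out as table points are added. The exponential stand-in below is illustrative, not an actual form-factor table.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x2(n_table, n_samples=200_000):
    """Inverse-CDF sampling of squared momentum transfer from a tabulated
    integral, with linear interpolation between table points (LIT)."""
    x2 = np.linspace(0.0, 10.0, n_table)
    pdf = np.exp(-x2)                    # stand-in for a squared form factor
    cdf = np.cumsum(pdf)
    cdf -= cdf[0]
    cdf /= cdf[-1]
    u = rng.random(n_samples)
    return np.interp(u, cdf, x2)         # linear interpolation of the inverse

def roughness(samples, bins=100):
    """Mean |second difference| of the histogram: a proxy for step noise."""
    h, _ = np.histogram(samples, bins=bins, range=(0.0, 5.0), density=True)
    return np.mean(np.abs(np.diff(h, 2)))

r_coarse = roughness(sample_x2(16))      # sparse table -> visible steps
r_fine = roughness(sample_x2(4096))      # dense table  -> smooth density
print(r_coarse, r_fine)
```

The coarse table makes the inverse CDF piecewise linear over wide intervals, so the sampled density is stepped; refining the table removes the artifact, mirroring the paper's conclusion.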
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for on its own. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
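The interface-flux iteration can be caricatured as a fixed-point problem: each region maps incoming boundary fluxes to outgoing ones through a response matrix plus a fixed volumetric-source term, and the shared boundary fluxes are iterated until they converge. The matrices below are toy values, not output of TWODANT.

```python
import numpy as np

# Toy response matrices mapping incoming to outgoing boundary flux for two
# coupled regions (rows/cols index boundary angular bins); illustrative only.
R1 = np.array([[0.30, 0.10],
               [0.05, 0.25]])
R2 = np.array([[0.20, 0.15],
               [0.10, 0.35]])
s1 = np.array([1.0, 0.5])    # volumetric-source contribution leaving region 1
s2 = np.array([0.2, 0.8])    # ... and leaving region 2

# Interface flux iteration: region 2's outflow feeds region 1 and vice versa
psi12 = np.zeros(2)          # flux leaving region 1 toward region 2
psi21 = np.zeros(2)          # flux leaving region 2 toward region 1
for _ in range(200):
    psi12_new = R1 @ psi21 + s1
    psi21_new = R2 @ psi12 + s2
    converged = max(np.abs(psi12_new - psi12).max(),
                    np.abs(psi21_new - psi21).max()) < 1e-12
    psi12, psi21 = psi12_new, psi21_new
    if converged:
        break

# Direct solve of the same linear system for comparison:
# psi12 = R1 R2 psi12 + R1 s2 + s1
psi12_direct = np.linalg.solve(np.eye(2) - R1 @ R2, R1 @ s2 + s1)
print(psi12, psi12_direct)
```

Because the response matrices are subcritical (spectral radius below one), the fixed-point sweep converges to the same boundary fluxes as the direct linear solve.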
NASA Astrophysics Data System (ADS)
Sarria, D.; Blelly, P.-L.; Forme, F.
2015-05-01
Terrestrial gamma-ray flashes are natural bursts of X and gamma rays, correlated with thunderstorms, that are likely produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere, and a small part of it may be detected by a satellite in low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured, and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then validate it by comparison with a benchmark GEANT4 simulation with similar settings. Finally, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that reach satellite altitude.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate a minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
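The importance-sampling idea behind such deep-penetration calculations can be shown on a toy problem: estimating transmission through an optically thick slab by drawing path lengths from a stretched exponential and weighting each history by the likelihood ratio. This is a generic sketch of the technique, not the FASTER-III sampling scheme.

```python
import math
import random

random.seed(1)

def transmission(mu, thickness, n, stretch=1.0):
    """Estimate exp(-mu*thickness) as P(free path > thickness), sampling
    path lengths from a stretched exponential (rate mu/stretch) and
    weighting each scoring history by the likelihood ratio."""
    rate = mu / stretch
    total = 0.0
    for _ in range(n):
        x = random.expovariate(rate)
        if x > thickness:
            # weight = true pdf / sampling pdf at the sampled path length
            w = (mu * math.exp(-mu * x)) / (rate * math.exp(-rate * x))
            total += w
    return total / n

mu, t = 1.0, 10.0              # ~10 mean free paths: analog MC rarely scores
exact = math.exp(-mu * t)
est = transmission(mu, t, 100_000, stretch=10.0)
print(est, exact)
```

An analog estimator would score roughly 4 events in 100,000 histories here; the biased estimator scores about a third of the time and recovers the answer with percent-level statistics.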
Yoriyaz, Helio; Moralles, Mauricio; Tarso Dalledone Siqueira, Paulo de; Costa Guimaraes, Carla da; Belonsi Cintra, Felipe; Santos, Adimir dos
2009-11-15
Purpose: Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimates in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. Methods: For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Results: Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Conclusion: Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors
NASA Astrophysics Data System (ADS)
Wu, Dufan; Xu, Xiaofei; Zhang, Li; Wang, Sen
2016-09-01
In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility, and Monte Carlo simulations require too much knowledge of the detector. In this paper, we propose a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Numerical models were then used to describe the processes of charge sharing, anti-charge sharing and spectral broadening, which are too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models; they can be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements. The model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.
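The spectral-broadening stage of such a hybrid model can be sketched as a Gaussian smearing of the Monte Carlo deposited-energy spectrum, with the width left as a free parameter to be calibrated against fluorescence peaks. The spectrum and width below are made up for illustration; the actual charge-sharing models are not reproduced here.

```python
import numpy as np

def broaden(energies, spectrum, sigma_keV):
    """Smear a deposited-energy spectrum with a Gaussian response whose
    width sigma_keV is a free calibration parameter of the detector model."""
    out = np.zeros_like(spectrum)
    for i, e0 in enumerate(energies):
        if spectrum[i] == 0.0:
            continue
        kernel = np.exp(-0.5 * ((energies - e0) / sigma_keV) ** 2)
        kernel /= kernel.sum()          # redistribute, conserving counts
        out += spectrum[i] * kernel
    return out

# Idealized deposition spectrum: a photopeak at 60 keV plus an escape peak
e = np.arange(0.0, 120.0, 0.5)
spec = np.zeros_like(e)
spec[e == 60.0] = 1000.0
spec[e == 35.0] = 200.0

meas = broaden(e, spec, sigma_keV=3.0)  # hypothetical 3 keV broadening
print(meas.sum())
```

Because each kernel is normalized, total counts are conserved while the delta-like peaks acquire the finite width seen in measured spectra; fitting sigma_keV to a measured fluorescence line is the calibration step the abstract describes.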
ITS: The integrated TIGER series of electron/photon transport codes - version 3.0
Halbleib, J. A.; Kensek, R. P.; Valdez, G. D.; Seltzer, S. M.; Berger, M. J.
1992-08-01
This paper reports on the ITS system, a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Version 3.0 is a major upgrade of the system, with important improvements in the physical model, variance reduction, I/O, and user friendliness. Improvements to the cross-section generator include the replacement of Born-approximation bremsstrahlung cross sections with the results of numerical phase-shift calculations, the addition of coherent scattering and of binding effects in incoherent scattering, an upgrade of collisional and radiative stopping powers, and a complete rewrite to Fortran 77 standards emphasizing block-IF structure.
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
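The core DDMC idea, one discrete hop between cells standing in for many small Monte Carlo steps, can be caricatured in one dimension: in a uniform, purely scattering slab the discretized diffusion equation gives equal leakage probabilities to the two neighbouring cells, and the escape probabilities then follow the classic gambler's-ruin result. This is an illustrative reduction, not the authors' frequency-dependent scheme.

```python
import random

random.seed(2)

def ddmc_escape_right(start_cell, n_cells, n_particles):
    """1D DDMC caricature for a uniform, purely scattering slab: each
    discrete diffusion step moves the particle one cell left or right with
    equal leakage probability. Returns the fraction escaping the right face."""
    right = 0
    for _ in range(n_particles):
        cell = start_cell
        while 0 <= cell < n_cells:      # one hop replaces many analog steps
            cell += 1 if random.random() < 0.5 else -1
        if cell == n_cells:             # absorbed by the right vacuum boundary
            right += 1
    return right / n_particles

# Gambler's-ruin check: starting in cell k of n, P(exit right) = (k+1)/(n+1)
p = ddmc_escape_right(start_cell=2, n_cells=9, n_particles=50_000)
print(p)
```

Each hop here stands in for the many sub-mean-free-path scatterings an Implicit Monte Carlo particle would take inside an optically thick cell, which is exactly where the efficiency gain of DDMC comes from.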
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Owrangi, Amir M.
2014-08-01
The variations of depth and surface dose with bone heterogeneity and beam angle were compared between unflattened and flattened photon beams using Monte Carlo simulations. Phase-space files of 6 MV photon beams with a field size of 10×10 cm2 were generated with and without the flattening filter based on a Varian TrueBeam linac. Depth and surface doses were calculated in bone and water phantoms using Monte Carlo simulations (the EGSnrc-based code). Dose calculations were repeated with the angles of the unflattened and flattened beams turned from 0° to 15°, 30°, 45°, 60°, 75° and 90° in the bone and water phantoms. Monte Carlo results for depth doses showed that, compared to the flattened beam, the unflattened photon beam had a higher dose in the build-up region but a lower dose beyond the depth of maximum dose. Dose ratios of the unflattened to flattened beams were calculated in the range of 1.6-2.6 with the beam angle varying from 0° to 90° in water. Similar results were found in the bone phantom. In addition, higher surface doses of about 2.5 times were found with beam angles equal to 0° and 15° in the bone and water phantoms. However, the surface dose deviation between the unflattened and flattened beams became smaller with increasing beam angle. Dose enhancements due to bone backscatter were also found at the water-bone and bone-water interfaces for both the unflattened and flattened beams in the bone phantom. With the Monte Carlo beams cross-calibrated to the monitor unit in the simulations, the variations of depth and surface dose with bone heterogeneity and beam angle were thus quantified. For the unflattened and flattened photon beams, the surface dose and the range of depth-dose ratios (unflattened to flattened beam) decreased with increasing beam angle. The dosimetric comparison in this study is useful in understanding the characteristics of the unflattened photon beam with respect to depth and surface dose in the presence of bone heterogeneity.
NASA Astrophysics Data System (ADS)
Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.
2010-08-01
This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.
NASA Astrophysics Data System (ADS)
Schneider, A. M.; Flanner, M.; Yang, P.; Yi, B.; Huang, X.; Feldman, D.
2015-12-01
The spectral albedo of a snow-covered surface is sensitive to the effective snow grain size. Snow metamorphism, then, affects the strength of the surface albedo feedback and changes the radiative energy budget of the planet. The Near-Infrared Emitting Reflectance Dome (NERD) is an instrument in development designed to measure snow effective radius from in situ bidirectional reflectance factors (BRFs) by illuminating a surface with nadir-positioned light-emitting diodes centered around 1.30 and 1.55 microns. A better understanding of the dependence of BRFs on snow grain shape and size is imperative to constraining measurements taken by the NERD. Here, we use the Monte Carlo method for photon transport to explore BRFs of snow surfaces of different shapes and sizes. In addition to assuming spherical grains and using Mie theory, we incorporate into the model the scattering phase functions and other single-scattering properties of the following nine aspherical grain shapes: hexagonal columns, plates, hollow columns, droxtals, hollow bullet rosettes, solid bullet rosettes, 8-element column aggregates, 5-element plate aggregates, and 10-element plate aggregates. We present the simulated BRFs of homogeneous snow surfaces for these ten shape habits and show their spectral variability for a wide range of effective radii. Initial findings using Mie theory indicate that surfaces of spherical particles exhibit rather Lambertian reflectance for the two incident wavelengths used in the NERD and show a monotonically decreasing trend in black-sky albedo with increasing effective radius. These results are consistent with previous studies and also demonstrate good agreement with models using the two-stream approximation.
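A minimal version of the Monte Carlo photon-transport core for a semi-infinite homogeneous snowpack can be sketched with Henyey-Greenstein scattering and a single-scattering albedo; the parameter values below are illustrative stand-ins, not the single-scattering properties computed in the study.

```python
import math
import random

random.seed(3)

def hg_cos_theta(g, u):
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase fn."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0            # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

def black_sky_albedo(ssa, g, n_photons):
    """Semi-infinite slab: photons enter downward at the surface, scatter
    with single-scattering albedo ssa until absorbed or escaping upward."""
    escaped = 0
    for _ in range(n_photons):
        mu = -1.0                       # direction cosine w.r.t. upward z-axis
        z = 0.0                         # surface at z = 0, medium below
        while True:
            z += mu * random.expovariate(1.0)   # free flight, unit MFP
            if z > 0.0:
                escaped += 1
                break
            if random.random() > ssa:           # absorbed at this event
                break
            ct = hg_cos_theta(g, random.random())
            st = math.sqrt(max(0.0, 1.0 - ct * ct))
            phi = 2.0 * math.pi * random.random()
            sm = math.sqrt(max(0.0, 1.0 - mu * mu))
            mu = mu * ct + sm * st * math.cos(phi)  # rotate direction cosine
    return escaped / n_photons

# More absorption (lower ssa) -> lower black-sky albedo, as for larger grains
a_dark = black_sky_albedo(ssa=0.60, g=0.85, n_photons=10_000)
a_bright = black_sky_albedo(ssa=0.99, g=0.85, n_photons=10_000)
print(a_dark, a_bright)
```

Tracking the azimuthal exit direction as well would turn this albedo estimator into a BRF estimator of the kind used in the study.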
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, Daryoush
1999-12-01
BEAM is a general-purpose EGS4 user code for simulating radiotherapy sources (Rogers et al., Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The efficiency of the uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the endpoint considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1 σ level and for all at the 2 σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1 σ level) statistics, at all depths past
Control of photon transport properties in nanocomposite nanowires
NASA Astrophysics Data System (ADS)
Moffa, M.; Fasano, V.; Camposeo, A.; Persano, L.; Pisignano, D.
2016-02-01
Active nanowires and nanofibers can be realized by the electric-field-induced stretching of polymer solutions with sufficient molecular entanglements. The resulting nanomaterials are attracting increasing attention in view of their application in a wide variety of fields, including optoelectronics, photonics, energy harvesting, nanoelectronics, and microelectromechanical systems. Realizing nanocomposite nanofibers is especially interesting in this respect. In particular, methods suitable for embedding inorganic nanocrystals in electrified jets, and then in active fiber systems, allow for controlling the light-scattering and refractive-index properties of the realized fibrous materials. We here report on the design, realization, and morphological and spectroscopic characterization of new species of active, composite nanowires and nanofibers for nanophotonics. We focus on the properties of light confinement and photon transport along the nanowire longitudinal axis, and on how these depend on nanoparticle incorporation. Optical loss mechanisms and their influence on device design and performance are also presented and discussed.
Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields
NASA Astrophysics Data System (ADS)
Czarnecki, D.; Zink, K.
2013-04-01
The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} with high precision. Both quantities require knowledge of the field-size- and detector-dependent correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} (hereafter simply k). The aim of this study is the determination of the correction factor k for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k is of the order of 1.2 at a field size of 1 × 1 cm2 for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm2, but below this field size there is a steep decrease of k below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results
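In terms of the Monte Carlo calculated doses, the correction factor is the double ratio k = (D_w/D_det) in the small clinical field over (D_w/D_det) in the machine-specific reference field. A tiny numeric sketch (made-up doses, not values from the study) shows how a detector that under-responds in the small field yields k > 1.

```python
def k_correction(dw_clin, ddet_clin, dw_msr, ddet_msr):
    """k^{f_clin,f_msr}_{Q_clin,Q_msr} from Monte Carlo dose ratios:
    (D_w/D_det) in the small clinical field divided by (D_w/D_det)
    in the machine-specific reference field."""
    return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

# Illustrative (made-up) doses per history for a 1x1 cm2 field and a
# 10x10 cm2 reference field with a small-volume ionization chamber:
k = k_correction(dw_clin=0.95, ddet_clin=0.80, dw_msr=1.00, ddet_msr=0.98)
print(round(k, 3))  # k > 1: the chamber under-responds in the small field
```

The volume-averaging perturbation discussed in the abstract enters through ddet_clin: the more the chamber under-reads in the small field, the larger k becomes.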
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users of the nuclear world, in particular in the fields of core design and radiation analysis.
NASA Astrophysics Data System (ADS)
Wright, Tracy; Lye, Jessica E.; Ramanathan, Ganesan; Harty, Peter D.; Oliver, Chris; Webb, David V.; Butler, Duncan J.
2015-01-01
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber between 0.4-1.1% across the range of calibration energies compared to the current calibration method.
1991-08-01
Version: 00 The original MORSE code was a multipurpose neutron and gamma-ray transport Monte Carlo code. It was designed as a tool for solving most shielding problems. Through the use of multigroup cross sections, the solution of neutron, gamma-ray, or coupled neutron-gamma-ray problems could be obtained in either the forward or adjoint mode. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry could be used, with an albedo option available at any material surface. Isotropic or anisotropic scattering up to a P16 expansion of the angular distribution was allowed. MORSE-CG incorporated the Mathematical Applications, Inc. (MAGI) combinatorial geometry routines. MORSE-B modifies the Monte Carlo neutron and photon transport computer code MORSE-CG by adding routines which allow various flexible options.
Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available, along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation
Time series analysis of Monte Carlo neutron transport calculations
NASA Astrophysics Data System (ADS)
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
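The central quantity in the approach above is the lag-1 autocorrelation coefficient of the (approximately) AR(1) fission-source series, which estimates the ratio of the desired mode eigenvalue to the fundamental. A minimal sketch on a synthetic AR(1) series, where the coefficient 0.8 is an illustrative stand-in for that eigenvalue ratio (function names are hypothetical):

```python
import random

def ar1_lag1_autocorr(series):
    """Estimate the lag-1 autocorrelation of a stationary series.
    In the POPs approach this coefficient approximates the ratio of
    a higher-mode eigenvalue to the fundamental-mode eigenvalue."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum((series[i] - mean) * (series[i + 1] - mean)
              for i in range(n - 1)) / n
    return cov / var

# Synthetic AR(1) process x[t] = phi * x[t-1] + noise; phi plays the
# role of the eigenvalue ratio (0.8 is an illustrative value).
random.seed(1)
phi = 0.8
x = [0.0]
for _ in range(20000):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

est = ar1_lag1_autocorr(x)
```

With 20000 samples the estimator's standard error is about 0.004, so the recovered coefficient closely tracks the true ratio.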
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Rising, Michael Evan
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Hypersensitive Transport in Photonic Crystals with Accidental Spatial Degeneracies
Makri, Eleana; Smith, Kyle; Chabanov, Andrey; Vitebskiy, Ilya; Kottos, Tsampikos
2016-01-01
A localized mode in a photonic layered structure can develop nodal points (nodal planes), where the oscillating electric field is negligible. Placing a thin metallic layer at such a nodal point results in the phenomenon of induced transmission. Here we demonstrate that if the nodal point is not a point of symmetry, then even a tiny alteration of the permittivity in the vicinity of the metallic layer drastically suppresses the localized mode along with the resonant transmission. This renders the layered structure highly reflective within a broad frequency range. Applications of this hypersensitive transport for optical and microwave limiting and switching are discussed. PMID:26903232
Efficient photon transport in positron emission tomography simulations using VMC++
NASA Astrophysics Data System (ADS)
Kawrakow, I.; Mitev, K.; Gerganov, G.; Madzhunkov, J.; Kirov, A.
2008-02-01
vmcPET, a VMC++ based fast code for simulating photon transport through the patient geometry for use in positron emission tomography related calculations, is presented. vmcPET is shown to be between 250 and 425 times faster than GATE in completely analog mode and up to 50000 times faster when using advanced variance reduction techniques. Excellent agreement between vmcPET and EGSnrc and GATE benchmarks is found. vmcPET is coupled to GATE via phase-space files of particles emerging from the patient geometry.
Radiative transport in fluorescence-enhanced frequency domain photon migration.
Rasmussen, John C; Joshi, Amit; Pan, Tianshu; Wareing, Todd; McGhee, John; Sevick-Muraca, Eva M
2006-12-01
Small animal optical tomography has significant potential application for streamlining drug discovery and pre-clinical investigation of drug candidates. However, accurate modeling of photon propagation in small animal volumes is critical to obtaining quantitatively accurate tomographic images. Herein we present solutions from a robust fluorescence-enhanced, frequency domain radiative transport equation (RTE) solver with unique attributes that facilitate its deployment within tomographic algorithms. Specifically, the coupled equations describing time-dependent excitation and emission light transport are solved using discrete ordinates (SN) angular differencing along with linear discontinuous finite-element spatial differencing on unstructured tetrahedral grids. Source iteration in conjunction with diffusion synthetic acceleration is used to iteratively solve the resulting system of equations. This RTE solver can accurately and efficiently predict ballistic as well as diffusion limited transport regimes, which could simultaneously exist in small animals. Furthermore, the solver provides accurate solutions on unstructured, tetrahedral grids with relatively large element sizes as compared to commonly employed solvers that use step differencing. The predictions of the solver are validated by a series of frequency-domain, phantom measurements with optical properties ranging from diffusion limited to transport limited propagation. Our results demonstrate that the RTE solution consistently matches measurements made under both diffusion and transport-limited conditions. This work demonstrates the use of an appropriate RTE solver for deployment in small animal optical tomography. PMID:17278821
Hadronic Monte Carlo Transport: A Very Personal View
NASA Astrophysics Data System (ADS)
Prael, R. E.
Much to the disappointment of many, our distinguished speaker for the initial plenary session has been unable to attend our conference. I was prevailed upon by the conference organization to present a talk which, as prescribed, will be of a historical nature but, as the title describes, will also be a very personal view. Perhaps the opinions expressed will find sympathy with my associates around the world who have devoted their efforts to, and found some satisfaction in, providing the code tools for radiation transport to a large, and occasionally anxious, community of users.
A Two-Dimensional Monte Carlo Code System for Linear Neutron Transport Calculations.
1980-04-24
Version 00 KIM (k-infinite-Monte Carlo) solves the steady-state linear neutron transport equation for a fixed source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional infinite thermal reactor lattice using the Monte Carlo method. In addition to the combinatorial description of domains, the program allows complex configurations to be represented by a discrete set of points, whereby the calculation speed is greatly improved. Configurations are described as the result of overlays of elementary figures over a basic domain.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Implicitly causality enforced solution of multidimensional transient photon transport equation.
Handapangoda, Chintha C; Premaratne, Malin
2009-12-21
A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependency of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, our technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation to its steady-state version. The resulting equations are solved using the discrete ordinates method with a finite volume approach. Our method therefore enables one to handle general anisotropic, inhomogeneous media in a single formulation, with an added degree of flexibility owing to the ability to invoke higher-order approximations of discrete ordinate quadrature sets. Compared with existing strategies, this method thus offers the advantage of representing the intensity with high accuracy, minimizing numerical dispersion and false propagation errors. The application of the method to one-, two- and three-dimensional geometries is provided. PMID:20052050
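The core step, expanding the time dependence in a causal Laguerre basis, can be sketched as follows: a pulse is projected onto the orthonormal Laguerre functions exp(-t/2)L_n(t) and then reconstructed from the coefficients. The pulse shape, truncation order, and quadrature below are illustrative choices, not the authors':

```python
import math

def laguerre_funcs(n_max, t):
    """Orthonormal Laguerre functions phi_n(t) = exp(-t/2) * L_n(t),
    built from the three-term recurrence for Laguerre polynomials."""
    L = [1.0, 1.0 - t]
    for n in range(1, n_max):
        L.append(((2 * n + 1 - t) * L[n] - n * L[n - 1]) / (n + 1))
    w = math.exp(-t / 2.0)
    return [w * Ln for Ln in L[: n_max + 1]]

def pulse(t):
    # Hypothetical causal pulse standing in for a short laser pulse.
    return t * math.exp(-t)

dt, T, N = 0.005, 60.0, 24
ts = [i * dt for i in range(int(T / dt) + 1)]
coeffs = [0.0] * (N + 1)
for t in ts:
    phis = laguerre_funcs(N, t)
    f = pulse(t)
    for n in range(N + 1):
        coeffs[n] += f * phis[n] * dt   # projection integral

# Reconstruct and measure the worst-case error on a coarse grid.
max_err = 0.0
for t in [0.5 * k for k in range(41)]:
    phis = laguerre_funcs(N, t)
    approx = sum(c * p for c, p in zip(coeffs, phis))
    max_err = max(max_err, abs(approx - pulse(t)))
```

Because the basis is orthonormal with respect to the weight exp(-t), the squared coefficients sum to the pulse energy (Parseval), and a modest truncation order already reconstructs the pulse closely.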
Multidimensional electron-photon transport with standard discrete ordinates codes
Drumm, C.R.
1997-04-01
A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages to using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems. The key to the method is a simultaneous solution of the continuous-slowing-down (CSD) portion and elastic-scattering portion of the scattering source by the Goudsmit-Saunderson theory. The resulting multigroup-Legendre cross sections are much smaller than the true scattering cross sections that they represent. Under certain conditions, the cross sections are guaranteed positive and converge with a low-order Legendre expansion.
Szoke, A; Brooks, E D; McKinley, M; Daffin, F
2005-03-30
The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.
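For a grey, purely absorbing medium, the substitution behind the difference formulation can be sketched as follows; this is a schematic of the idea only, not the authors' exact equations:

```latex
% Standard grey, absorption-emission transport equation:
\frac{1}{c}\frac{\partial I}{\partial t}
  + \boldsymbol{\Omega}\cdot\nabla I + \sigma_a I = \sigma_a B(T)
% Substituting I = B(T) + D, where D is the deviation of the specific
% intensity from the local equilibrium (Planck) field:
\frac{1}{c}\frac{\partial D}{\partial t}
  + \boldsymbol{\Omega}\cdot\nabla D + \sigma_a D
  = -\frac{1}{c}\frac{\partial B}{\partial t}
    - \boldsymbol{\Omega}\cdot\nabla B
```

The right-hand side contains the derivative source terms mentioned in the abstract and vanishes as the field approaches local equilibrium, which is why the formulation is so quiet in thick media.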
NASA Astrophysics Data System (ADS)
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano’s theorem. Additionally, Lewis’ approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano’s and Lewis’ approaches are stated in this new equation. Fano’s theorem is found not to apply in the presence of electromagnetic fields. Lewis’ theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
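The Cooper-Larsen idea referenced above can be illustrated directly: weight-window centers are chosen inversely proportional to an approximate deterministic forward flux, so that the particle population stays roughly uniform across the domain. The function name, the unit-weight normalization at the source cell, and the window width are illustrative assumptions:

```python
def weight_windows(flux, source_cell, width=5.0):
    """Sketch of automatic weight-window generation from an
    approximate deterministic scalar flux: centers are proportional
    to 1/flux, normalized to unit weight at the source cell, with
    lower/upper bounds a fixed ratio ('width') apart."""
    centers = [flux[source_cell] / f for f in flux]
    half = width ** 0.5
    return [(c / half, c * half) for c in centers]

# Deep-penetration demo: flux drops an order of magnitude per cell,
# so windows deep in the problem sit at correspondingly high weights,
# triggering splitting as particles penetrate.
flux = [10.0 ** (-i) for i in range(6)]
ww = weight_windows(flux, 0)
```

A particle drifting into a deep cell with its original weight falls below that cell's lower bound and is rouletted or survives with boosted weight, while splitting near the source keeps tallies populated everywhere.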
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
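A minimal sketch of the moment and macrodispersivity estimates described above, for a one-dimensional particle cloud; the snapshot data and the finite-difference form of the dispersivity are illustrative, not the paper's implementation:

```python
def cloud_moments(positions, masses):
    """Zeroth moment (total mass), first moment (centroid), and
    second central moment (spatial variance) of a tracer cloud."""
    m0 = sum(masses)
    mean = sum(m * x for m, x in zip(masses, positions)) / m0
    var = sum(m * (x - mean) ** 2 for m, x in zip(masses, positions)) / m0
    return m0, mean, var

def macrodispersivity(var_a, var_b, mean_a, mean_b):
    """Large-time longitudinal macrodispersivity from the growth of
    the second moment with mean displacement:
    A_L ~ (1/2) * d(sigma^2) / d(xbar)."""
    return 0.5 * (var_b - var_a) / (mean_b - mean_a)

# Two hypothetical snapshots of the same cloud at successive times.
early = cloud_moments([0.8, 1.0, 1.2], [1.0, 2.0, 1.0])
late = cloud_moments([2.4, 3.0, 3.6], [1.0, 2.0, 1.0])
A_L = macrodispersivity(early[2], late[2], early[1], late[1])
```

In the Monte Carlo setting these moments are computed per realization and then averaged over the ensemble, which is also where the standard-error estimates come from.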
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-01-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interfaces and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce the computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with photon ray splitting can accelerate the computation by 7.6 times (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions. PMID:26417866
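Two of the stated rules lend themselves to a direct sketch: the photon-number criterion (at least 5 times the total voxel number) and weight-conserving photon ray splitting on entry to a refined region. Function names are hypothetical:

```python
def min_photon_number(n_voxels, factor=5):
    """The study's criterion: launch at least 'factor' (= 5) times
    the total voxel number of photons."""
    return factor * n_voxels

def split_photon(weight, n_split):
    """Photon ray splitting on entry to a refined-grid region: one
    photon of the given weight becomes n_split subphotons of equal
    weight, conserving total weight (schematic of the idea only)."""
    return [weight / n_split] * n_split

# For a 100 x 100 x 100 coarse grid and a 4-way split:
photons_needed = min_photon_number(100 * 100 * 100)
subphotons = split_photon(1.0, 4)
```

Because the total photon number is set by the coarse grid, splitting is what restores the per-voxel statistics inside the fine-grid area without rerunning the whole simulation at fine resolution.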
NASA Astrophysics Data System (ADS)
Chow, James C. L.
2013-05-01
This study investigated the variations of the dose and dose distribution in small-animal irradiation due to the photon beam energy and the presence of inhomogeneity. Based on the same mouse computed tomography image set, three Monte Carlo phantoms, namely inhomogeneous, homogeneous and bone-tissue, were used in this study. These phantoms were generated by overriding the relative electron density of no voxel (inhomogeneous), all voxels (homogeneous) or the bone voxels (bone-tissue) to one. 360° photon arcs with beam energies of 50-1250 kV were used in mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. It was found that the dose conformity increased with the photon beam energy from the kV to the MV range. For the inhomogeneous mouse phantom, increasing the photon beam energy from 50 kV to 1250 kV increased the dose deposited at the isocenter by a factor of about 21. For the bone dose enhancement, the mean dose was 1.4 times higher when the bone inhomogeneity was not neglected using the 50 kV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose in the mouse irradiation can be found for photon beams in the energy range of 50-200 kV, and the dose enhancement decreases with an increase of the beam energy. Moreover, the MV photon beam has a higher dose at the isocenter and a better dose conformity compared to the kV beam.
NASA Astrophysics Data System (ADS)
Chow, James C. L.
2012-10-01
This study investigated radiation dose variations in pre-clinical irradiation due to the photon beam energy and the presence of tissue heterogeneity. Based on the same mouse computed tomography image dataset, three phantoms, namely heterogeneous, homogeneous and bone homogeneous, were used. These phantoms were generated by overriding the relative electron density of no voxel (heterogeneous), all voxels (homogeneous) or the bone voxels (bone homogeneous) to one. 360° photon arcs with beam energies of 50-1250 keV were used in mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. Monte Carlo simulations were carried out in parallel using multiple nodes of a high-performance computing cluster. It was found that the dose conformity increased with the photon beam energy from the keV to the MeV range. For the heterogeneous mouse phantom, increasing the photon beam energy from 50 keV to 1250 keV increased the dose deposited at the isocenter sevenfold. For the bone dose enhancement, the mean dose was 2.7 times higher when the bone heterogeneity was not neglected using the 50 keV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose was found for photon beams in the energy range of 50-200 keV, and the dose enhancement decreased with an increase of the beam energy. Moreover, the MeV photon beam had a higher dose at the isocenter and a better dose conformity compared to the keV beam.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
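The tracking/tally-server split described above can be sketched in a few lines. This is a schematic illustration only; the message format, function names, and single-process accumulation loop below are assumptions made for the sketch, not OpenMC's actual implementation.

```python
import random
from collections import defaultdict

def track_particles(n_particles, n_cells, rng):
    """Tracking-processor side: simulate histories and emit (cell, score)
    messages instead of updating a local tally array."""
    outbox = []
    for _ in range(n_particles):
        cell = rng.randrange(n_cells)   # cell where the event was scored
        score = rng.random()            # e.g. a track-length estimator score
        outbox.append((cell, score))
    return outbox

def tally_server(messages):
    """Tally-server side: continuously receive and accumulate contributions."""
    tallies = defaultdict(float)
    for cell, score in messages:
        tallies[cell] += score
    return tallies

rng = random.Random(42)
msgs = track_particles(1000, 4, rng)
tallies = tally_server(msgs)
print(f"{len(msgs)} scores accumulated over {len(tallies)} cells")
```

In the real algorithm the two sides run on disjoint sets of compute nodes and the messages travel over the interconnect; the point of the sketch is only that no tracking node ever needs to hold the full tally array.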
Hart, Vern P; Doyle, Timothy E
2013-09-01
A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
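The Henyey-Greenstein phase function mentioned above has a closed-form inverse CDF, which is what makes it popular for sampling scattering angles in photon-migration codes. A minimal sampling sketch using the standard textbook inversion (the anisotropy value is illustrative):

```python
import random

def sample_hg_cos_theta(g, xi):
    """Invert the Henyey-Greenstein CDF for the scattering cosine;
    xi is a uniform random number in [0, 1)."""
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    mu = (1.0 + g * g - t * t) / (2.0 * g)
    return max(-1.0, min(1.0, mu))  # guard against floating-point overshoot

rng = random.Random(0)
g = 0.9  # typical anisotropy factor for biological tissue
samples = [sample_hg_cos_theta(g, rng.random()) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(f"<cos theta> = {mean:.3f}")  # should approach g = 0.9
```

The sample mean of the scattering cosine converges to g, which is a quick sanity check on any phase-function sampler.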
Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin
2010-03-01
Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model for bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain and the synovial fluid layer in the joints. In this paper, a hybrid radiosity-diffusion model is proposed for dealing with non-scattering regions within diffusing domains. This hybrid method incorporates a priori information on the geometry of the non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as the 'gold standard' for photon transport simulation.
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
Densmore, Jeffery D. (jdd@lanl.gov); Urbatsch, Todd J. (tmonster@lanl.gov); Evans, Thomas M. (tme@lanl.gov); Buksas, Michael W. (mwbuksas@lanl.gov)
2007-03-20
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the
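The bookkeeping advantage DDMC targets, replacing many small collision-to-collision steps in optically thick cells with one discrete hop per cell, can be caricatured in a toy walk. This is purely illustrative (hypothetical opacities and threshold), not the discretized diffusion equation of the paper:

```python
import random

def walk_cost(cell_opacities, thick_threshold, rng):
    """Walk one particle across a 1-D row of cells and return the number of
    simulated events. Thick cells cost one discrete hop (DDMC-like); thin
    cells cost roughly one event per mean free path (standard MC)."""
    cell, events = 0, 0
    while 0 <= cell < len(cell_opacities):
        tau = cell_opacities[cell]
        events += 1 if tau > thick_threshold else max(1, int(tau))
        cell += 1 if rng.random() < 0.5 else -1  # symmetric hop for the toy
    return events

rng = random.Random(1)
opacities = [50.0] * 10                                 # optically thick interior
ddmc = [walk_cost(opacities, 10.0, rng) for _ in range(100)]
plain = [walk_cost(opacities, float("inf"), rng) for _ in range(100)]
print(f"mean events: hybrid={sum(ddmc)/100:.0f}, standard={sum(plain)/100:.0f}")
```

With 50 mean free paths per cell, the per-cell cost drops from ~50 events to one hop, which is the efficiency gain the abstract describes.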
A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport
NASA Astrophysics Data System (ADS)
Robinson, P. B.; Peterson, J. D. L.
2005-12-01
The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly between the heterogeneous, homogeneous, and bone homogeneous phantom for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. This dosimetric effect, due to
NASA Astrophysics Data System (ADS)
Kim, Don-Soo
Dose measurements and radiation transport calculations were investigated for the interactions within the human brain of fast neutrons, slow neutrons, thermal neutrons, and photons associated with accelerator-based boron neutron capture therapy (ABNCT). To estimate the overall dose to the human brain, it is necessary to distinguish the doses from the different radiation sources. Using organic scintillators, human head phantom and detector assemblies were designed, constructed, and tested to determine the most appropriate dose estimation system to discriminate dose due to the different radiation sources that will ultimately be incorporated into a human head phantom to be used for dose measurements in ABNCT. Monoenergetic and continuous energy neutrons were generated via the 7Li(p,n)7Be reaction in a metallic lithium target near the reaction threshold using the 5.5 MV Van de Graaff accelerator at the University of Massachusetts Lowell. A human head phantom was built to measure and to distinguish the doses which result from proton recoils induced by fast neutrons, alpha particles and recoil lithium nuclei from the 10B(n,alpha)7Li reaction, and photons generated in the 7Li accelerator target as well as those generated inside the head phantom through various nuclear reactions at the same time during neutron irradiation procedures. The phantom consists of two main parts to estimate dose to tumor and dose to healthy tissue as well: a 3.22 cm3 boron loaded plastic scintillator which simulates a boron containing tumor inside the brain and a 2664 cm3 cylindrical liquid scintillator which represents the surrounding healthy tissue in the head. The Monte Carlo code MCNPX(TM) was used for the simulation of radiation transport due to neutrons and photons and extended to investigate the effects of neutrons and other radiation on the brain at various depths.
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media. PMID:25607163
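The perturbation idea underlying pfMC-style methods, reusing histories sampled in a baseline medium and reweighting them for a perturbed medium, can be illustrated on the simplest possible case: uncollided transmission through a purely absorbing slab. All coefficients below are illustrative assumptions, not values from the paper:

```python
import math
import random

def free_paths(mu, n, rng):
    """Sample n exponential free paths for attenuation coefficient mu."""
    return [-math.log(1.0 - rng.random()) / mu for _ in range(n)]

mu0, mu1, t = 0.2, 0.25, 5.0     # baseline and perturbed mu (cm^-1), slab (cm)
rng = random.Random(9)
paths = free_paths(mu0, 200_000, rng)

# Baseline transmission estimate from the sampled histories:
base = sum(1.0 for d in paths if d > t) / len(paths)
# Reweight the *same* histories for mu1 instead of resampling; for an
# uncollided crossing the likelihood ratio is exp(-(mu1 - mu0) * t):
ratio = math.exp(-(mu1 - mu0) * t)
perturbed = sum(ratio for d in paths if d > t) / len(paths)
print(f"baseline T = {base:.4f} (exact {math.exp(-mu0*t):.4f}), "
      f"reweighted T = {perturbed:.4f} (exact {math.exp(-mu1*t):.4f})")
```

The reweighted estimate matches the analytic transmission for the perturbed medium without rerunning the simulation, which is the efficiency argument for perturbation Monte Carlo; real pfMC methods also reweight scattering events.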
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulation and obtaining a simulation speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
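The two weight-preserving operations analyzed above, geometric splitting and Russian roulette, are short enough to state directly. A minimal sketch with illustrative parameters:

```python
import random

def split(weight, n):
    """Geometric splitting: replace one particle by n copies whose total
    statistical weight equals the original (unbiased by construction)."""
    return [weight / n] * n

def russian_roulette(weight, survival_prob, rng):
    """Kill the particle with probability 1 - p; survivors carry weight / p,
    so the expected weight is preserved."""
    return weight / survival_prob if rng.random() < survival_prob else 0.0

rng = random.Random(7)
assert sum(split(1.0, 4)) == 1.0                 # weight conserved exactly
outcomes = [russian_roulette(0.25, 0.5, rng) for _ in range(100_000)]
mean_w = sum(outcomes) / len(outcomes)
print(f"mean rouletted weight = {mean_w:.4f}")   # expected value is 0.25
```

Splitting lowers variance at the price of more histories to follow; roulette does the reverse, and the paper's cost analysis is about choosing where and how strongly to apply each.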
Light transport and lasing in complex photonic structures
NASA Astrophysics Data System (ADS)
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials with a dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites differs greatly from that in homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena, and this has been a fast-growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by dielectric networks due to the local uniformity of their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples. Our
Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
Moskowitz, B.S.
2000-02-01
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.
A General-Purpose Monte Carlo Gamma-Ray Transport Code System for Minicomputers.
1981-08-27
Version 00 The OGRE code system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two codes which treat slab geometry. OGRE-P1 computes the dose on one side of a slab for a source on the other side, and HOTONE computes energy deposition in addition. The source may be monodirectional, isotropic, or cosine distributed.
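A slab-geometry gamma-ray calculation of the kind OGRE performs reduces to a few lines in the simplest case of uncollided transmission through a purely absorbing slab, where the Monte Carlo estimate can be checked against exp(-μt). The coefficients below are illustrative, not from the code package:

```python
import math
import random

def transmit_fraction(mu, thickness, n, rng):
    """Fraction of normally incident photons that cross a purely
    absorbing slab without interacting."""
    crossed = 0
    for _ in range(n):
        # free path to first interaction; 1 - random() avoids log(0)
        d = -math.log(1.0 - rng.random()) / mu
        if d > thickness:
            crossed += 1
    return crossed / n

rng = random.Random(3)
mu, t = 0.2, 5.0                     # cm^-1 and cm, illustrative values
estimate = transmit_fraction(mu, t, 200_000, rng)
print(f"MC = {estimate:.4f}, analytic = {math.exp(-mu * t):.4f}")
```

A production code like OGRE adds scattering, energy-dependent cross sections, and dose response functions on top of this free-path sampling loop.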
Correlated few-photon transport in one-dimensional waveguides: Linear and nonlinear dispersions
Roy, Dibyendu
2011-04-15
We address correlated few-photon transport in one-dimensional waveguides coupled to a two-level system (TLS), such as an atom or a quantum dot. We derive exactly the single-photon and two-photon current (transmission) for linear and nonlinear (tight-binding sinusoidal) energy-momentum dispersion relations of photons in the waveguides and compare the results for the different dispersions. A large enhancement of the two-photon current for the sinusoidal dispersion has been seen at a certain transition energy of the TLS away from the single-photon resonances.
Rauf Abdullah, Nzar; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2016-09-21
We investigate theoretically the balance of the static magnetic and the dynamical photon forces in the electron transport through a quantum dot in a photon cavity with a single photon mode. The quantum dot system is connected to external leads and the total system is exposed to a static perpendicular magnetic field. We explore the transport characteristics through the system by tuning the ratio, [Formula: see text], between the photon energy, [Formula: see text], and the cyclotron energy, [Formula: see text]. Enhancement in the electron transport with increasing electron-photon coupling is observed when [Formula: see text]. In this case the photon field dominates and stretches the electron charge distribution in the quantum dot, extending it towards the contact area for the leads. Suppression in the electron transport is found when [Formula: see text], as the external magnetic field causes circular confinement of the charge density around the dot. PMID:27420809
Exponentially-convergent Monte Carlo for the 1-D transport equation
Peterson, J. R.; Morel, J. E.; Ragusa, J. C.
2013-07-01
We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
Chen Yu; Bielajew, Alex F.; Litzenberg, Dale W.; Moran, Jean M.; Becchetti, Frederick D.
2005-12-15
It has recently been shown experimentally that the focusing provided by a longitudinal nonuniform high magnetic field can significantly improve electron beam dose profiles. This could permit precise targeting of tumors near critical areas and minimize the radiation dose to surrounding healthy tissue. The experimental results together with Monte Carlo simulations suggest that the magnetic confinement of electron radiotherapy beams may provide an alternative to proton or heavy ion radiation therapy in some cases. In the present work, the external magnetic field capability of the Monte Carlo code PENELOPE was utilized by providing a subroutine that modeled the actual field produced by the solenoid magnet used in the experimental studies. The magnetic field in our simulation covered the region from the vacuum exit window to the phantom, including the surrounding air. In a longitudinal nonuniform magnetic field, it is observed that the electron dose can be focused in both the transverse and longitudinal directions. The measured dose profiles of the electron beam are generally reproduced in the Monte Carlo simulations to within a few percent in the region of interest, provided that the geometry and the energy of the incident electron beam are accurately known. Comparisons for the photon beam dose profiles with and without the magnetic field are also made. The experimental results are qualitatively reproduced in the simulation. Our simulation shows that the excessive dose at the beam entrance is due to the magnetic field trapping and focusing scattered secondary electrons that were produced in the air by the incident photon beam. The simulations also show that the electron dose profile can be manipulated by the appropriate control of the beam energy together with the strength and displacement of the longitudinal magnetic field.
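The transverse confinement described above rests on the fact that a longitudinal field only rotates the transverse velocity without changing its magnitude, so electrons gyrate about field lines instead of drifting away. A minimal sketch of that invariant (not PENELOPE's field-stepping algorithm; the step angle is illustrative):

```python
import math

def rotate_transverse(vx, vy, omega_dt):
    """Advance the transverse velocity by one gyration step of angle
    omega_dt = (qB/m) * dt; an exact rotation, so |v_t| is invariant."""
    c, s = math.cos(omega_dt), math.sin(omega_dt)
    return c * vx + s * vy, -s * vx + c * vy

vx, vy = 1.0, 0.0                    # initial transverse velocity, arbitrary units
for _ in range(1000):
    vx, vy = rotate_transverse(vx, vy, 0.01)
print(f"|v_t| after 1000 steps = {math.hypot(vx, vy):.9f}")
```

Because the transverse speed never grows, the Larmor radius stays bounded, which is the qualitative confinement mechanism the abstract exploits.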
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solution within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.
Monte Carlo simulation of photonic state tomography: a virtual Hanbury Brown and Twiss correlator
NASA Astrophysics Data System (ADS)
Murray, Eoin; Juska, Gediminas; Pelucchi, Emanuele
2016-05-01
This paper provides a theoretical background for simulations of particular quantum optics experiments, namely photon intensity correlation measurements. A practical example, adapted to polarisation-entangled photon pairs emitted from a quantum dot, is presented. The tool, a virtual Hanbury Brown and Twiss correlator, simulates the polarisation-resolved second-order correlation functions, which can then be used in a photonic state tomography procedure, i.e. a full description of a light source's polarisation state. This educational tool is meant to improve general understanding of such quantum optics experiments.
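The second-order correlation measurement the virtual correlator simulates can be mimicked with a toy beamsplitter model: each pulse's photons are randomly routed to two detectors and g2(0) is estimated from coincidences. This sketch is a drastic simplification of the paper's tool, and all parameters are illustrative:

```python
import random

def g2_zero(photon_numbers, rng):
    """Route each pulse's photons through a 50:50 splitter to two detectors
    and estimate g2(0) = <n1*n2> / (<n1><n2>) from the coincidences."""
    n1s, n2s, coinc = [], [], []
    for n in photon_numbers:
        n1 = sum(1 for _ in range(n) if rng.random() < 0.5)
        n1s.append(n1)
        n2s.append(n - n1)
        coinc.append(n1 * (n - n1))
    m = len(photon_numbers)
    return (sum(coinc) / m) / ((sum(n1s) / m) * (sum(n2s) / m))

rng = random.Random(5)
print("single photons:", g2_zero([1] * 100_000, rng))  # exactly 0.0: antibunching
print("photon pairs:  ", g2_zero([2] * 100_000, rng))  # approaches 0.5 for n = 2
```

A single photon can never fire both detectors at once, so an ideal single-photon emitter gives g2(0) = 0, which is the signature such correlators look for.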
Wang, Lilie L. W.; Klein, David; Beddar, A. Sam
2010-01-01
Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: A plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached by a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached by an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence within about 0.5% at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence within 1% in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of all three PSDs modeled in this study in the photon energy range from 200 keV (monoenergetic) to 18 MV (linac beam). The PSD would be even closer
NASA Astrophysics Data System (ADS)
Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio
2012-03-01
The purpose of this study was to present a Monte Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, based on MC simulations. For comparison, the same cases were also planned with the PINNACLE TPS using conventional inverse intensity-modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed acceptable planning target volume (PTV) coverage similar to that of conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for the CARMEN plans. The ipsilateral breast volume receiving 50% Dp and 100% Dp was on average 1.4-1.7 times lower for the CARMEN plans. Skin and whole-body low-dose volumes were also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
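The k-eigenvalue that MC++ computes is the dominant eigenvalue of the fission-production operator; Monte Carlo codes estimate it by iterating fission generations. The generation-to-generation update can be illustrated deterministically with power iteration on a hypothetical two-group matrix (the numbers below are made up for the sketch):

```python
def power_iteration(matrix, vec, iters=200):
    """Deterministic analogue of the generation iteration: k converges to the
    dominant eigenvalue and vec to the fundamental-mode shape."""
    for _ in range(iters):
        new = [sum(m * v for m, v in zip(row, vec)) for row in matrix]
        k = sum(new) / sum(vec)              # generation-to-generation ratio
        vec = [x / sum(new) for x in new]    # renormalize the source shape
    return k, vec

# Hypothetical 2x2 "fission production" matrix; its dominant eigenvalue is 1.1.
M = [[0.9, 0.3],
     [0.2, 0.8]]
k, mode = power_iteration(M, [1.0, 1.0])
print(f"k = {k:.4f}")  # -> 1.1000
```

In the Monte Carlo version the matrix-vector product is replaced by transporting a batch of fission neutrons and banking the fission sites they produce, but the outer iteration has the same structure.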
Monte Carlo impurity transport modeling in the DIII-D Tokamak
Evans, T. E.; Finkenthal, D. F.
1998-09-28
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed.
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo dose calculations performed using this model and the single-bone model were compared, demonstrating that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
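The fitting step described in this abstract can be sketched numerically: calcium and phosphorus mass fractions are fitted as low-order polynomials of bone density and then evaluated for an arbitrary model bone. The reference values below are illustrative placeholders in the adult-bone range, not the published compositions the authors used.

```python
import numpy as np

# Illustrative (not the paper's) reference bones: density (g/cm^3) and
# calcium / phosphorus mass fractions, loosely in the adult-bone range.
density = np.array([1.18, 1.33, 1.46, 1.61, 1.75, 1.92])
ca_frac = np.array([0.057, 0.099, 0.132, 0.167, 0.198, 0.225])
p_frac  = np.array([0.027, 0.046, 0.061, 0.078, 0.092, 0.105])

# Fit each composition as a quadratic polynomial of bone density.
ca_poly = np.polynomial.Polynomial.fit(density, ca_frac, deg=2)
p_poly  = np.polynomial.Polynomial.fit(density, p_frac, deg=2)

def model_bone(rho):
    """Return (Ca, P) mass fractions for a model bone of density rho."""
    return float(ca_poly(rho)), float(p_poly(rho))

# Evaluate the analytical formulation for a bone not in the reference set.
ca, p = model_bone(1.5)
```

In the actual model the remaining elements would be assigned their averaged compositions and the fractions renormalized before computing cross-section data.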
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2008-01-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ∼10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator into GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552
NASA Astrophysics Data System (ADS)
Kahraman, A.; Kaya, S.; Jaksic, A.; Yilmaz, E.
2015-05-01
Radiation-sensing Field Effect Transistors (RadFETs or MOSFET dosimeters) with SiO2 gate dielectric have found applications in space, radiotherapy clinics, and high-energy physics laboratories. More sensitive RadFETs, which require modifications in device design, including the gate dielectric, are being considered for personal dosimetry applications. This paper presents results of a detailed study of the RadFET energy response simulated with the PENELOPE Monte Carlo code. Alternative materials to SiO2 were investigated to develop new high-efficiency radiation sensors. Namely, in addition to SiO2, Al2O3 and HfO2 were simulated as gate materials, and the energy deposited in these layers was determined for photon irradiation with energies between 20 keV and 5 MeV. The simulations were performed for capped and uncapped configurations of devices irradiated by point and extended sources, the surface area of which matches that of the RadFETs. Energy distributions of transmitted and backscattered photons were estimated using impact detectors to provide information about particle fluxes within the geometrical structures. The absorbed energy values in the RadFET material zones were recorded. For photons with low and medium energies, the physical processes that affect the absorbed energy values in different gate materials are discussed on the basis of the modelling results. The results show that HfO2 is the most promising of the simulated gate materials.
Thiam, C; Bobin, C; Bouchard, J
2010-01-01
The implementation of the TDCR method (Triple to Double Coincidence Ratio) is based on a liquid scintillation system which comprises three photomultipliers; at LNHB, this counter can also be used in the beta-channel of a 4pi(LS)beta-gamma coincidence counting equipment. It is generally considered that the gamma-sensitivity of the liquid scintillation detector comes from the interaction of the gamma-photons in the scintillation cocktail, but when solid gamma-ray-emitting sources are introduced instead of the scintillation vial, light emitted by the surroundings of the counter is observed. The explanation proposed in this article is that this effect comes from the emission of Cherenkov photons induced by Compton scattering in the photomultiplier windows. In order to support this assertion, the creation and propagation of Cherenkov photons inside the TDCR counter are simulated using the Monte Carlo code GEANT4. Stochastic calculations of double coincidences confirm the hypothesis of Cherenkov light produced in the photomultiplier windows. PMID:20031429
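The plausibility of the Cherenkov explanation can be checked with a back-of-the-envelope threshold calculation. The sketch below computes the Cherenkov kinetic-energy threshold for an electron, assuming a refractive index of about 1.5 for the photomultiplier window glass (an assumed value, not one taken from the article); Compton electrons set in motion by typical gamma-ray energies comfortably exceed it.

```python
import math

def cherenkov_threshold_keV(n):
    """Kinetic-energy threshold (keV) for Cherenkov emission by an electron
    in a medium of refractive index n: the electron must have beta > 1/n."""
    mec2 = 511.0                                 # electron rest energy, keV
    gamma_th = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
    return mec2 * (gamma_th - 1.0)

# Assumed refractive index ~1.5 for a borosilicate PMT window.
e_th = cherenkov_threshold_keV(1.5)              # roughly 175 keV
```

Since Compton electrons from common gamma emitters can carry several hundred keV, electrons scattered in the windows can indeed radiate Cherenkov light.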
A simplified spherical harmonic method for coupled electron-photon transport calculations
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SP{sub N} method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP{sub N} method has never before been applied to charged-particle transport. We have performed a first time Fourier analysis of the source iteration scheme and the P{sub 1} diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP{sub N} equations. Our theoretical analyses indicate that the source iteration and P{sub 1} DSA schemes are as effective for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. Previous analyses have indicated that the P{sub 1} DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S{sub N} equations, yet is very effective for the 1-D S{sub N} equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. It has previously been shown for 1-D S{sub N} calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP{sub N} approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP{sub N} method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP{sub N} method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous energy cross sections and S ({alpha}, {beta}) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever expanding user base.
Cavity-photon-switched coherent transient transport in a double quantum waveguide
Abdullah, Nzar Rauf; Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei
2014-12-21
We study cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs, but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon-tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be selected via an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, either parallel or perpendicular to the direction of electron propagation in the waveguide system, is demonstrated.
Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Venugopalan, Vasan; Spanier, Jerome
2016-01-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642
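The reweighting idea behind pMC can be illustrated with a minimal, unpolarized 1-D analogue: baseline histories are generated once, then reweighted by the likelihood ratio (mu_s'/mu_s)^n * exp(-(mu_t' - mu_t) * L), where n is the number of scattering events and L the total path length, to estimate transmittance at perturbed optical properties without re-running the simulation. The slab geometry and coefficients below are illustrative choices, not the paper's cervical-tissue scattering model.

```python
import math
import random

def track(mu_s, mu_a, d, rng):
    """Trace one photon through a 1-D slab [0, d]: exponential free paths,
    +/-z isotropic scattering, analog absorption.  Returns
    (transmitted?, number of scatters, total path length in the slab)."""
    mu_t = mu_s + mu_a
    z, direction, n, path = 0.0, 1.0, 0, 0.0
    while True:
        s = -math.log(rng.random()) / mu_t
        z_new = z + direction * s
        if z_new > d:                        # escapes through the far face
            return True, n, path + (d - z) / direction
        if z_new < 0.0:                      # escapes back through the entry face
            return False, n, path + z / (-direction)
        path += s
        if rng.random() >= mu_s / mu_t:      # absorbed at this collision
            return False, n, path
        n += 1                               # survived: scatter into +/-z
        direction = 1.0 if rng.random() < 0.5 else -1.0
        z = z_new

def pmc_estimate(histories, mu_s, mu_a, mu_s_p, mu_a_p):
    """Reweight baseline histories to perturbed coefficients (mu_s', mu_a')."""
    mu_t, mu_t_p = mu_s + mu_a, mu_s_p + mu_a_p
    total = 0.0
    for transmitted, n, length in histories:
        if transmitted:
            total += (mu_s_p / mu_s) ** n * math.exp(-(mu_t_p - mu_t) * length)
    return total / len(histories)

rng = random.Random(1)
mu_s, mu_a, d = 2.0, 0.1, 1.0
histories = [track(mu_s, mu_a, d, rng) for _ in range(20000)]

# Perturb mu_s by +10% and compare pMC against a fresh direct simulation.
mu_s_p = 2.2
t_pmc = pmc_estimate(histories, mu_s, mu_a, mu_s_p, mu_a)
t_direct = sum(t for t, _, _ in
               (track(mu_s_p, mu_a, d, rng) for _ in range(20000))) / 20000
```

The pMC estimate tracks the direct simulation for modest perturbations, at the cost of one baseline run instead of one run per parameter set.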
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
NASA Astrophysics Data System (ADS)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in testcases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited by the fluid simulator), the implementation was made efficient and reusable.
Monte Carlo methods for neutrino transport in type-II supernovae
NASA Astrophysics Data System (ADS)
Janka, Hans-Thomas
Neutrinos play an important role in the type-II supernova scenario. Numerous approaches have been made in order to treat the generation and transport of neutrinos and the interactions between neutrinos and matter during stellar collapse and the shock propagation phase. However, all computationally fast methods have in common the fact that they cannot avoid simplifications in describing the interactions and, furthermore, have to use parameterizations in handling the Boltzmann transport equation. In order to provide an instrument for calibrating these treatments and for calculating neutrino spectra emitted from given stellar configurations, a Monte Carlo transport code was designed. Special attention was paid to an accurate computation of scattering kernels and source functions. Neutrino spectra for a hydrostatic stage of a 20 solar mass supernova simulation were generated and conclusions drawn concerning a late time revival of the stalled shock by neutrino heating.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
Lacoste, V; Gressier, V
2007-01-01
The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation. PMID:17578872
Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie
2011-01-15
Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2x2 to 35x35 cm{sup 2}). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma [''On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam,'' Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E{sub AV} is the mean energy, FWHM is the energy spectrum broadening, and R is the beam radius), the angular divergence ({theta}) of primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2x2 to 35x35 cm{sup 2}. In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use {theta}=0 deg., while the 4 and 15 MV beam models require {theta}=0.5 deg. and 0.6 deg., respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2x2 cm{sup 2}) and large (e.g., 35x35 cm{sup 2}) fields, while a perfect agreement is
Topological Photonic Quasicrystals: Fractal Topological Spectrum and Protected Transport
NASA Astrophysics Data System (ADS)
Bandres, Miguel A.; Rechtsman, Mikael C.; Segev, Mordechai
2016-01-01
We show that it is possible to have a topological phase in two-dimensional quasicrystals without any magnetic field applied, but instead introducing an artificial gauge field via dynamic modulation. This topological quasicrystal exhibits scatter-free unidirectional edge states that are extended along the system's perimeter, contrary to the states of an ordinary quasicrystal system, which are characterized by power-law decay. We find that the spectrum of this Floquet topological quasicrystal exhibits a rich fractal (self-similar) structure of topological "minigaps," manifesting an entirely new phenomenon: fractal topological systems. These topological minigaps form only when the system size is sufficiently large because their gapless edge states penetrate deep into the bulk. Hence, the topological structure emerges as a function of the system size, contrary to periodic systems where the topological phase can be completely characterized by the unit cell. We demonstrate the existence of this topological phase both by using a topological index (Bott index) and by studying the unidirectional transport of the gapless edge states and its robustness in the presence of defects. Our specific model is a Penrose lattice of helical optical waveguides—a photonic Floquet quasicrystal; however, we expect this new topological quasicrystal phase to be universal.
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Jabbari, Keyvan; Seuntjens, Jan
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated using MCNPX as a reference code. While analytical pencil beam transport algorithms show large errors (up to 10%) near small high-density heterogeneities, our calculated dose and isodose distributions deviated by less than 2% from the MCNPX results. In terms of speed, the code runs 200 times faster than MCNPX. The fast MC code developed in this work calculates the dose for 10^6 particles in less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
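The track-repetition idea can be illustrated with a toy 1-D version: tracks are generated once in a reference medium (water) and replayed through a heterogeneous density profile by scaling each step length inversely with the local density. The step lengths, energies, and voxel sizes below are arbitrary illustrative choices, not those of the MCNPX-based code.

```python
import math
import random

def generate_tracks(n_tracks, n_steps, rng):
    """Pre-generate straight-ahead tracks in the reference medium: each step
    carries (path length in water, energy deposited on that step)."""
    return [[(-0.1 * math.log(rng.random()), rng.random())
             for _ in range(n_steps)] for _ in range(n_tracks)]

def replay(tracks, density, voxel, n_voxels):
    """Replay water tracks through a density profile (one value per voxel):
    a step of water-equivalent length s advances s / rho where density is rho."""
    dose = [0.0] * n_voxels
    for track in tracks:
        z = 0.0
        for s, edep in track:
            i = int(z / voxel)
            if i >= n_voxels:
                break
            z += s / density[i]              # range scaling by local density
            j = int(z / voxel)
            if j < n_voxels:
                dose[j] += edep              # deposit at the step endpoint
    return dose

rng = random.Random(7)
tracks = generate_tracks(200, 40, rng)
water = [1.0] * 100                              # 100 voxels of 0.1 cm
lung = [1.0] * 10 + [0.3] * 30 + [1.0] * 60      # low-density insert at 1-4 cm
dose_w = replay(tracks, water, 0.1, 100)
dose_l = replay(tracks, lung, 0.1, 100)

def last_hit(dose):
    """Index of the deepest voxel receiving any dose (a crude range proxy)."""
    return max(i for i, d in enumerate(dose) if d > 0.0)
```

Because the same tracks are reused, the heterogeneous case costs only the cheap replay step, and the low-density insert pushes the dose deeper, as expected.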
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
The Moment Condensed History Algorithm for Monte Carlo Electron Transport Simulations
Tolar, D R; Larsen, E W
2001-02-27
We introduce a new Condensed History algorithm for the Monte Carlo simulation of electron transport. To obtain more accurate simulations, the new algorithm preserves the mean position and the variance in the mean position exactly for electrons that have traveled a given path length and are traveling in a given direction. This is accomplished by deriving the zeroth-, first-, and second-order spatial moments of the Spencer-Lewis equation and employing this information directly in the Condensed History process. Numerical calculations demonstrate the advantages of our method over standard Condensed History methods.
Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport
NASA Astrophysics Data System (ADS)
Dureau, David; Poëtte, Gaël
2014-06-01
This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousand cores on the petaflop supercomputer Tera100.
New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O'Brien, M J; Taylor, J M
2007-03-08
The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability of initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of in-line visualization capabilities.
NASA Astrophysics Data System (ADS)
Ulmer, W.; Pyyry, J.; Kaissl, W.
2005-04-01
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab-geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes <=5.5 cm2 and densities <=0.25 g cm-3, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.
Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
Piels, Molly; Zibar, Darko
2016-02-01
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
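The contrast drawn in this abstract can be sketched with a random-walk Metropolis sampler. The block below infers the two parameters of a toy first-order (RC-type) magnitude response |H| = A / sqrt(1 + (w*tau)^2) from synthetic noisy data; the model, noise level, step sizes, and the choice to start the chain near the least-squares point are all illustrative assumptions, not the devices or estimators studied in the paper.

```python
import math
import random

def metropolis(logpost, x0, steps, n_samples, rng):
    """Random-walk Metropolis: propose Gaussian moves, accept with
    probability min(1, exp(delta log-posterior))."""
    chain, lp = [list(x0)], logpost(x0)
    for _ in range(n_samples):
        cand = [x + rng.gauss(0.0, s) for x, s in zip(chain[-1], steps)]
        lp_c = logpost(cand)
        if math.log(rng.random()) < lp_c - lp:
            chain.append(cand)
            lp = lp_c
        else:
            chain.append(chain[-1])
    return chain

rng = random.Random(3)
A_true, tau_true, sigma = 1.0, 2.0, 0.01
w = [0.25 * k for k in range(1, 21)]
data = [A_true / math.sqrt(1.0 + (wi * tau_true) ** 2) + rng.gauss(0.0, sigma)
        for wi in w]

def logpost(theta):
    """Gaussian likelihood with a flat prior on the positive quadrant."""
    A, tau = theta
    if A <= 0.0 or tau <= 0.0:
        return -math.inf
    return -0.5 * sum((d - A / math.sqrt(1.0 + (wi * tau) ** 2)) ** 2
                      for d, wi in zip(data, w)) / sigma ** 2

# Start near the least-squares solution for brevity; burn in, then average.
chain = metropolis(logpost, [1.0, 2.0], [0.02, 0.05], 4000, rng)
burn = chain[1000:]
tau_mean = sum(x[1] for x in burn) / len(burn)
```

Unlike a single least-squares fit, the chain yields the full posterior, so credible intervals follow directly from the retained samples even when the parameter-data relationship is nonlinear.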
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within +/-1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258
3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.
Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E
2016-05-01
The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and the related mass density, while neutrons aid in estimating the product of density and the material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized. PMID:26953978
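The density cancellation that yields the composition can be illustrated with Beer-Lambert transmission. A minimal Python sketch with a hypothetical two-material library; the attenuation numbers are made up for illustration, not evaluated nuclear data:

```python
import math

# Hypothetical two-material library: (photon mass attenuation, neutron mass
# attenuation), both in cm^2/g -- illustrative numbers, not evaluated data.
LIBRARY = {
    "aluminum": (0.0266, 0.045),
    "polyethylene": (0.0262, 0.20),
}

def identify(T_photon, T_neutron, thickness_cm):
    """Beer-Lambert inversion: density cancels in the ratio mu_n/mu_p,
    leaving a material-specific signature."""
    mu_p = -math.log(T_photon) / thickness_cm
    mu_n = -math.log(T_neutron) / thickness_cm
    ratio = mu_n / mu_p
    return min(LIBRARY, key=lambda m: abs(LIBRARY[m][1] / LIBRARY[m][0] - ratio))

# Forward-simulate polyethylene (density 0.94 g/cm^3, 5 cm thick) and recover it.
rho, t = 0.94, 5.0
T_p, T_n = math.exp(-0.0262 * rho * t), math.exp(-0.20 * rho * t)
print(identify(T_p, T_n, t))  # -> polyethylene
```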
NASA Astrophysics Data System (ADS)
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-07-01
A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm2, 4 × 14.5 cm2 or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1x10^18 cm^-3 at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows complete electron transport properties in 4H- and 6H-SiC.
Comparison of generalized transport and Monte-Carlo models of the escape of a minor species
NASA Technical Reports Server (NTRS)
Demars, H. G.; Barakat, A. R.; Schunk, R. W.
1993-01-01
The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments and is less good for higher-order moments. Moments of order higher than the heat flow and hence beyond the level of description provided by the transport model have a noticeable effect on the shape of distribution functions in the collisionless region.
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
An implicit Monte Carlo method for simulation of impurity transport in divertor plasma
Suzuki, Akiko; Hayashi, Nobuhiko; Hatayama, Akiyoshi
1997-02-01
A new "implicit" Monte Carlo (IMC) method has been developed to simulate ionization and recombination processes of impurity ions in divertor plasmas. The IMC method takes into account many ionization and recombination processes during a time step Δt. The time step is not limited by the condition Δt << τ_min (τ_min: the minimum characteristic time of atomic processes), which is forced to be adopted in conventional Monte Carlo methods. We incorporate this method into a one-dimensional impurity transport model. In this transport calculation, impurity ions are followed with a time step about 10 times larger than that used in conventional methods. The average charge state of impurities, <Z>, and the radiative cooling rate, L(T_e), are calculated at the electron temperature T_e in divertor plasmas. These results are compared with those obtained from the simple noncoronal model. 10 refs., 7 figs.
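The idea of stepping over many atomic events at once can be illustrated for a single ionization/recombination pair, where the occupation probability over Δt is known in closed form. A Python sketch; the rates, time step, and two-state reduction are illustrative assumptions, not the published IMC scheme:

```python
import math
import random

def implicit_step(state, S, R, dt, rng):
    """Advance one impurity ion over dt using the exact two-state occupation
    probability, so dt need not resolve 1/(S + R) as an analog method must.
    state: 0 = recombined, 1 = ionized; S = ionization rate, R = recombination rate."""
    p_eq = S / (S + R)                         # equilibrium ionized fraction
    decay = math.exp(-(S + R) * dt)            # how much memory of the old state survives
    p1 = p_eq + ((1.0 if state == 1 else 0.0) - p_eq) * decay
    return 1 if rng.random() < p1 else 0

rng = random.Random(1)
S, R, dt = 50.0, 10.0, 1.0                     # dt >> 1/(S + R) = 1/60
ions = [implicit_step(0, S, R, dt, rng) for _ in range(20000)]
frac = sum(ions) / len(ions)
print(abs(frac - S / (S + R)) < 0.02)          # near the 5/6 equilibrium fraction
```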
Hybrid two-dimensional Monte-Carlo electron transport in self-consistent electromagnetic fields
Mason, R.J.; Cranfill, C.W.
1985-01-01
The physics and numerics of the hybrid electron transport code ANTHEM are described. The need for hybrid modeling of laser-generated electron transport is outlined, and a general overview of the hybrid implementation in ANTHEM is provided. ANTHEM treats the background ions and electrons in a laser target as coupled fluid components moving relative to a fixed Eulerian mesh. The laser converts cold electrons to an additional hot electron component which evolves on the mesh as either a third coupled fluid or as a set of Monte Carlo PIC particles. The fluids and particles move in two dimensions through electric and magnetic fields calculated via the Implicit Moment method. The hot electrons are coupled to the background thermal electrons by Coulomb drag, and both the hot and cold electrons undergo Rutherford scattering against the ion background. Subtleties of the implicit E- and B-field solutions, the coupled hydrodynamics, and large time step Monte Carlo particle scattering are discussed. Sample applications are presented.
NASA Astrophysics Data System (ADS)
Müller, Florian; Jenny, Patrick; Meyer, Daniel
2014-05-01
To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess how the input uncertainty in permeability propagates through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, and stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
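The MLMC telescoping sum described above can be sketched on a toy problem in which a cheap "coarse solve" and an expensive "fine solve" share the same random input. The integrand and per-level sample counts below are illustrative stand-ins for a reservoir simulation:

```python
import math
import random

def q_level(theta, level):
    """Midpoint-rule approximation of Q(theta) = integral_0^1 exp(sin(2*pi*x + theta)) dx
    on 2^level cells -- a stand-in for a PDE solve on grid level `level`."""
    n = 2 ** level
    return sum(math.exp(math.sin(2 * math.pi * (i + 0.5) / n + theta))
               for i in range(n)) / n

def mlmc(samples_per_level, rng):
    """Telescoping estimator E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    with the fine and coarse solves driven by the same random input."""
    total = 0.0
    for level, n_samp in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n_samp):
            theta = rng.uniform(0.0, 2 * math.pi)
            fine = q_level(theta, level)
            coarse = q_level(theta, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n_samp
    return total

rng = random.Random(0)
# Many cheap coarse samples, progressively fewer expensive fine ones.
est = mlmc([4000, 1000, 250], rng)
print(abs(est - 1.2661) < 0.05)  # exact answer is the Bessel value I_0(1) ~ 1.2661
```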
Babich, L. P. Donskoy, E. N.; Kutsyk, I. M.
2008-07-15
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
NASA Astrophysics Data System (ADS)
Medhat, M. E.
2015-02-01
The main goal of this work is to test the applicability of Geant4 electromagnetic models for studying mass attenuation in different types of composite materials at photon energies of 59.5, 80, 356, 661.6, 1173.2 and 1332.5 keV. The simulated mass attenuation coefficients were compared with experimental and theoretical data for the same samples, and good agreement was observed. The results indicate that this procedure can be followed to determine gamma-ray attenuation data at several energies in different materials.
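For composite materials, simulated or tabulated elemental coefficients combine through the standard mixture rule. A small Python sketch; the elemental values below are illustrative, and real coefficients should come from XCOM or the Geant4 data libraries:

```python
def mass_attenuation_mixture(weight_fractions, elemental_mu_rho):
    """Mixture rule: (mu/rho)_composite = sum_i w_i * (mu/rho)_i."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-6
    return sum(w * elemental_mu_rho[el] for el, w in weight_fractions.items())

# Water near 661.6 keV from illustrative elemental values (cm^2/g); real
# coefficients should be taken from XCOM or the Geant4 data libraries.
mu_rho = mass_attenuation_mixture(
    {"H": 0.1119, "O": 0.8881},
    {"H": 0.1537, "O": 0.0770},
)
print(round(mu_rho, 4))  # -> 0.0856
```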
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC)-developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distribution of the aforementioned particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light/heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe
A Test Particle Model for Monte Carlo Simulation of Plasma Transport Driven by Quasineutrality
NASA Astrophysics Data System (ADS)
Kuhl, Nelson M.
1995-11-01
This paper is concerned with the problem of transport in controlled nuclear fusion as it applies to confinement in a tokamak or stellarator. We perform numerical experiments to validate a mathematical model of P. R. Garabedian in which the electric potential is determined by quasineutrality because of singular perturbation of the Poisson equation. The simulations are made using a transport code written by O. Betancourt and M. Taylor, with changes to incorporate our case studies. We adopt a test particle model naturally suggested by the problem of tracking particles in plasma physics. The statistics due to collisions are modeled by a drift kinetic equation whose numerical solution is based on the Monte Carlo method of A. Boozer and G. Kuo-Petravic. The collision operator drives the distribution function in velocity space towards the normal distribution, or Maxwellian. It is shown that details of the collision operator other than its dependence on the collision frequency and temperature matter little for transport, and the role of conservation of momentum is investigated. Exponential decay makes it possible to find the confinement times of both ions and electrons by high performance computing. Three-dimensional perturbations in the electromagnetic field model the anomalous transport of electrons and simulate the turbulent behavior that is presumably triggered by the displacement current. We make a convergence study of the method, derive scaling laws that are in good agreement with predictions from experimental data, and present a comparison with the JET experiment.
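The Boozer-Kuo-Petravic collision operator referenced above updates the pitch variable with a deterministic drag plus a random kick of prescribed size. A minimal Python sketch; the collision frequency, step count, and clamping are illustrative choices:

```python
import math
import random

def bkp_pitch_step(lam, nu_dt, rng):
    """One Monte Carlo pitch-angle scattering step in the style of Boozer and
    Kuo-Petravic: lambda' = lambda*(1 - nu*dt) +/- sqrt((1 - lambda^2)*nu*dt)."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    lam = lam * (1.0 - nu_dt) + sign * math.sqrt(max(0.0, (1.0 - lam * lam) * nu_dt))
    return max(-1.0, min(1.0, lam))            # guard against round-off overshoot

rng = random.Random(3)
lams = [0.9] * 5000                            # strongly anisotropic initial pitch
for _ in range(600):                           # many steps with nu*dt = 0.01
    lams = [bkp_pitch_step(l, 0.01, rng) for l in lams]
mean = sum(lams) / len(lams)
print(abs(mean) < 0.05)                        # mean pitch relaxes toward isotropy
```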
NASA Astrophysics Data System (ADS)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-01
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
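The VHS model referenced above ties the total cross section to relative speed through the temperature exponent omega of the viscosity. A short sketch using Bird's VHS formula; the N2-like parameter values are illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def vhs_cross_section(g, d_ref, T_ref, omega, m_r):
    """Bird's VHS total cross section (m^2) at relative speed g (m/s):
    sigma = pi*d_ref^2 * [2*k*T_ref/(m_r*g^2)]^(omega - 1/2) / Gamma(5/2 - omega).
    omega = 0.5 recovers the hard-sphere limit sigma = pi*d_ref^2."""
    return (math.pi * d_ref ** 2
            * (2.0 * K_B * T_ref / (m_r * g * g)) ** (omega - 0.5)
            / math.gamma(2.5 - omega))

# Illustrative N2-like parameters: d_ref = 4.17e-10 m at T_ref = 273 K, omega = 0.74.
m_n2 = 4.65e-26                                # kg; reduced mass of a like pair is m/2
slow = vhs_cross_section(1000.0, 4.17e-10, 273.0, 0.74, m_n2 / 2)
fast = vhs_cross_section(2000.0, 4.17e-10, 273.0, 0.74, m_n2 / 2)
print(fast < slow)  # softer than hard sphere: cross section falls with speed
```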
Monte Carlo impurity transport modeling in the DIII-D tokamak
Evans, T.E.; Finkenthal, D.F.
1998-09-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed. © 1998 American Institute of Physics.
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105-day Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with either deterministic or stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system
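The gain from tallying a known outgoing distribution rather than individual samples can be seen on a one-dimensional toy phase function: the analog tally of a moment carries statistical noise, while an expected-value tally scores the exact moment at every collision. A Python sketch under stated assumptions; the linear phase function and its P1 moment are illustrative, not the thesis method:

```python
import math
import random

def sample_mu(a, rng):
    """Inverse-CDF sample from the toy phase function f(mu) = (1 + a*mu)/2 on [-1, 1]."""
    u = rng.random()
    return 2.0 * (-0.5 + math.sqrt(0.25 - a * (0.5 - a / 4.0 - u))) / a

a, n = 0.6, 20000
rng = random.Random(4)
# Analog tally of the P1 moment: score mu for each sampled outgoing direction.
analog = sum(sample_mu(a, rng) for _ in range(n)) / n
# Expected-value tally: the outgoing distribution is known a priori, so each
# collision can score the exact moment integral of mu*f(mu), i.e. a/3 -- zero variance.
expected = a / 3.0
print(abs(analog - expected) < 0.02)
```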
3D electro-thermal Monte Carlo study of transport in confined silicon devices
NASA Astrophysics Data System (ADS)
Mohamed, Mohamed Y.
The rapid proliferation of portable microelectronic devices and the continued shrinking of microprocessor size have provided tremendous motivation for scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less practical. As new technologies overcome these limitations, they face new, unexpected problems, including how to accurately simulate and predict the behavior of these devices and how to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature-dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non
NASA Astrophysics Data System (ADS)
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of tissue phantoms were measured with a spectrometer under several illumination and optical settings for various homogeneous tissue phantoms. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that absorption of dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
NASA Astrophysics Data System (ADS)
Chauvet, Yves
1985-07-01
This paper summarizes two improvements of a real production code using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work done to obtain a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is, a minimum size for a task. With the second example we propose a method that is strongly X-MP oriented in order to obtain the best speedup factor on such a computer. In conclusion we show that Monte Carlo algorithms are very well suited to future vector and parallel computers.
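The vectorization principle described above, replacing one-history-at-a-time loops with batched vector operations, can be sketched with free-flight sampling. A minimal NumPy illustration; the cross section and batch size are arbitrary:

```python
import numpy as np

def free_flight_scalar(mu, n, rng):
    """History-at-a-time: one exponential free flight sampled per loop iteration."""
    return [-np.log(rng.random()) / mu for _ in range(n)]

def free_flight_vector(mu, n, rng):
    """Event-at-a-time: a whole batch of free flights in one vector operation."""
    return -np.log(rng.random(n)) / mu

rng = np.random.default_rng(5)
d = free_flight_vector(0.2, 100000, rng)       # macroscopic cross section 0.2 /cm
print(abs(d.mean() - 5.0) < 0.1)               # mean free path 1/0.2 = 5 cm
```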
Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail
2015-08-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system, such as photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter or single-photon modulator, and may find important applications in quantum information.
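For the single-atom case mentioned above, the monochromatic reflection coefficient is the standard waveguide-QED Lorentzian. A tiny Python sketch; here gamma_1d denotes decay into the guided mode and gamma_prime decay into free space, and this reproduces only the textbook single-atom result, not the chain dynamics:

```python
def reflectance(delta, gamma_1d, gamma_prime=0.0):
    """Single two-level atom in a 1D waveguide: |r|^2 at detuning delta,
    with r = -(gamma_1d/2) / (1j*delta + (gamma_1d + gamma_prime)/2)."""
    r = -(gamma_1d / 2.0) / (1j * delta + (gamma_1d + gamma_prime) / 2.0)
    return abs(r) ** 2

print(reflectance(0.0, 1.0))       # lossless atom on resonance reflects fully: 1.0
print(reflectance(0.0, 1.0, 1.0))  # half the decay lost to free space: 0.25
```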
McKinley, M S; Brooks III, E D; Szoke, A
2002-12-03
We compare the Implicit Monte Carlo (IMC) technique to the Symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for test problems involving the evolution of a collisionally pumped trapping problem to its steady-state, the surface heating of a cold medium by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
McKinley, M S; Brooks III, E D; Szoke, A
2002-03-20
We compare the Implicit Monte Carlo (IMC) technique to the Symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for examples involving the evolution of a collisionally pumped trapping problem to steady-state, the surface heating of cold media by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is too large to fit in the memory of a single processor and must therefore be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
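The particle-handoff step at the heart of domain decomposition can be sketched with a toy 1D random walk, where each "processor" owns one slab and buffers emigrants for delivery to the owning neighbour. The walk, slab count, and reflecting walls are illustrative assumptions:

```python
import random

def run_decomposed(n_particles, n_domains, n_steps, rng):
    """Toy 1D domain-decomposed random walk: each 'rank' owns one slab
    [d, d+1); particles leaving a slab are buffered and handed to the
    owning neighbour, mimicking the streaming communication step."""
    domains = [[] for _ in range(n_domains)]
    for _ in range(n_particles):
        domains[0].append(0.5)                      # source in the first slab
    for _ in range(n_steps):
        outboxes = [[] for _ in range(n_domains)]
        for d, bank in enumerate(domains):
            survivors = []
            for x in bank:
                x += rng.uniform(-0.4, 0.4)         # one flight
                x = min(max(x, 0.0), n_domains - 1e-9)  # reflecting walls
                owner = int(x)
                (survivors if owner == d else outboxes[owner]).append(x)
            domains[d] = survivors
        for d in range(n_domains):                  # "MPI" delivery phase
            domains[d].extend(outboxes[d])
    return [len(bank) for bank in domains]

counts = run_decomposed(1000, 4, 50, random.Random(2))
print(sum(counts))  # particles are conserved across the exchanges -> 1000
```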
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of remote memory access (RMA) operations.
Monte Carlo charge transport and photoemission from negative electron affinity GaAs photocathodes
NASA Astrophysics Data System (ADS)
Karkare, Siddharth; Dimitrov, Dimitre; Schaff, William; Cultrera, Luca; Bartnik, Adam; Liu, Xianghong; Sawyer, Eric; Esposito, Teresa; Bazarov, Ivan
2013-03-01
High quantum yield, low transverse energy spread, and prompt response time make GaAs activated to negative electron affinity an ideal candidate for a photocathode in high brightness photoinjectors. Even after decades of investigation, the exact mechanism of electron emission from GaAs is not well understood. Here, photoemission from such photocathodes is modeled using detailed Monte Carlo electron transport simulations. Simulations show a quantitative agreement with the experimental results for quantum efficiency, energy distributions of emitted electrons, and response time without the assumption of any ad hoc parameters. This agreement between simulation and experiment sheds light on the mechanism of electron emission and provides an opportunity to design novel semiconductor photocathodes with optimized performance.
Monte Carlo simulation of the transport of atoms in DC magnetron sputtering
NASA Astrophysics Data System (ADS)
Mahieu, S.; Buyle, G.; Depla, D.; Heirwegh, S.; Ghekiere, P.; De Gryse, R.
2006-02-01
In this work, we present a Monte Carlo simulation of the transport of sputtered particles through the gas phase during DC magnetron sputter deposition. The nascent sputter flux has been simulated by SRIM and TRIM, while the collisions of the sputtered atoms with the sputter gas are simulated with a screened Coulomb potential, with the Molière screening function and the Firsov screening length. The model calculates the flux of the atoms arriving at the substrate, their energy, their direction, and the number of collisions they underwent. The model was verified by comparing the simulated thickness profiles with experimental profiles of deposited layers of Al, Cu and Zr/Y (85/15 wt%) on large substrates (ratio of the substrate diameter to the target diameter is 8). Good agreement between the experimental data and the simulations is obtained for sputter pressures (0.3-1 Pa) and target-substrate distances (7-16 cm).
Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald
2011-06-01
MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 comprises the best available computational physics models for the transport of radiation through matter. In addition to the basic radiation transport physics contained in the Geant4 core, MRED can track energy loss in tetrahedral geometric objects, includes a cross-section biasing and track weighting technique for variance reduction, and offers additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process, and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I.; Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance, and that site-energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, the hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
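The thermally activated hopping picture described in this abstract can be sketched with a small kinetic Monte Carlo walk. Everything below (the `hop_rate` form, attempt frequency, disorder width, and the reduction to a 1D chain) is an illustrative assumption, not taken from the paper:

```python
import math
import random

def hop_rate(dE, nu0=1e12, kT=0.025):
    """Miller-Abrahams-type rate: uphill hops are Boltzmann suppressed."""
    return nu0 * math.exp(-dE / kT) if dE > 0 else nu0

def kmc_walk(energies, n_hops=1000, kT=0.025, seed=1):
    """Kinetic Monte Carlo walk of one carrier on a 1D chain of site energies.
    Returns the final site index and the elapsed time."""
    rng = random.Random(seed)
    site, t = len(energies) // 2, 0.0
    for _ in range(n_hops):
        nbrs = [n for n in (site - 1, site + 1) if 0 <= n < len(energies)]
        rates = [hop_rate(energies[n] - energies[site], kT=kT) for n in nbrs]
        total = sum(rates)
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
        x = rng.random() * total                     # pick a hop with probability ∝ rate
        site = nbrs[0] if x < rates[0] else nbrs[-1]
    return site, t

# disordered chain: Gaussian site-energy disorder mimics the energetic
# disorder that the paper finds inhibits transport in small-particle films
rng = random.Random(0)
energies = [rng.gauss(0.0, 0.05) for _ in range(201)]
site, t = kmc_walk(energies)
```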
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
An application of Fleck effective scattering to the difference formulation for photon transport
Daffin, F
2006-10-16
We introduce a new treatment of the difference formulation [1] for photon radiation transport without scattering in 1D slab geometry that is closely analogous to that of Fleck and Cummings [2] for the traditional formulation. The resulting form is free of implicit source terms and has the effective scattering familiar in the field of transport.
A Photon Free Method to Solve Radiation Transport Equations
Chang, B
2006-09-05
The multi-group discrete-ordinate equations of radiation transfer are solved for the first time by Newton's method. It is a photon-free method because the photon variables are eliminated from the radiation equations to yield an equivalent system of equations that is N_group × N_direction times smaller. The smaller set of equations can be solved more efficiently than the original set. Newton's method is also more stable than the semi-implicit linear method currently used by conventional radiation codes.
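The key step above, solving the reduced nonlinear system by Newton's method, follows the standard iteration x ← x − f(x)/f′(x). A minimal generic sketch for the scalar case (the paper's system is of course much larger, and the example function is purely illustrative):

```python
def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a scalar nonlinear equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / jac(x)   # Newton update: subtract f(x)/f'(x)
    return x

# example: solve x**3 - 2 = 0, i.e. find the cube root of 2
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0)
```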
NASA Astrophysics Data System (ADS)
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data, the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x-rays, searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy was observed between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image. Calculations were repeated using the estimated smaller volume, leading to results in excellent agreement with the experimental data. MC users should be aware of the potential differences between the design blueprints of a detector and its actual production, as they may differ substantially. This caution applies to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging.
Looe, Hui Khee; Harder, Dietrich; Poppe, Björn
2015-08-21
The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor. PMID:26267311
NASA Astrophysics Data System (ADS)
Shi, X.; Ye, M.; Curtis, G. P.; Lu, D.; Meyer, P. D.; Yabusaki, S.; Wu, J.
2011-12-01
Assessment of parametric uncertainty for groundwater reactive transport models is challenging because the models are highly nonlinear with respect to their parameters due to nonlinear reaction equations and process coupling. The nonlinearity may yield parameter distributions that are non-Gaussian and have multiple modes. For such parameter distributions, the widely used nonlinear regression methods may not be able to accurately quantify predictive uncertainty. One solution to this problem is to use Markov chain Monte Carlo (MCMC) techniques. Both the nonlinear regression and MCMC methods are used in this study to quantify the parametric uncertainty of a surface complexation model (SCM) developed to simulate hexavalent uranium [U(VI)] transport in column experiments. First, a brute-force Monte Carlo (MC) simulation with hundreds of thousands of model executions is conducted to understand the surface of the objective function and the predictive uncertainty of uranium concentration. Subsequently, the Gauss-Marquardt-Levenberg method is applied to calibrate the model. It shows that, even with multiple initial guesses, the local optimization method has difficulty finding the global optimum because of the rough surface of the objective function and local minima due to model nonlinearity. Another problem with the nonlinear regression is the underestimation of predictive uncertainty, as both the linear and nonlinear confidence intervals are narrower than those obtained from the brute-force MC simulation. Since the brute-force MC simulation is computationally expensive, the above challenges for parameter estimation and predictive uncertainty analysis are addressed using a computationally efficient MCMC technique, the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. The results obtained from running DREAM, compared with those from the brute-force Monte Carlo simulations, show that MCMC not only successfully infers the multi-modal posterior probability
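The MCMC machinery contrasted here with nonlinear regression rests on the Metropolis acceptance rule. A minimal random-walk Metropolis sampler on a bimodal target, standing in for DREAM (which adds differential-evolution proposals and multiple chains), might look like the following; the target density, step size, and chain length are all illustrative:

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=42):
    """Random-walk Metropolis sampler for a 1-D log-posterior density."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # accept with probability min(1, post(cand)/post(x))
        if math.log(1.0 - rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# bimodal target: mixture of two Gaussians, the kind of multi-modal
# posterior that defeats local (regression-based) optimizers
def log_post(x):
    p = 0.5 * math.exp(-0.5 * (x - 2) ** 2) + 0.5 * math.exp(-0.5 * (x + 2) ** 2)
    return math.log(p) if p > 0.0 else -1e300

samples = metropolis(log_post, x0=0.0)
```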
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Jiang, Runqing
2012-06-01
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, whose thicknesses were varied in the ranges of 0.5-2 cm and 0.2-1 cm, respectively. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. Dose in the second bone layer interfacing with air was found to be
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Ercolano, B.; Dale, J.
2016-02-01
Ionizing feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies however on photoionization/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionization and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields on to Cartesian grids before performing radiative transfer/photoionization calculations, results in significant errors in the temperature and ionization structure of the region. Furthermore, we describe the mathematical process of tracing photon energy packets through a Voronoi tessellation, including optimizations, treating problematic cases and boundary conditions. We perform various benchmarks using both the original version of MOCASSIN and the modified version using the Voronoi tessellation. We show that for uniform grids, or equivalently a cubic lattice of cell generating points, the new Voronoi version gives the same results as the original Cartesian grid version of MOCASSIN for all benchmarks. For non-uniform initial conditions, such as using snapshots from smoothed particle hydrodynamics simulations, we show that the Voronoi version performs better than the Cartesian grid version, resulting in much better resolution in dense regions.
NASA Astrophysics Data System (ADS)
Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2016-09-01
We investigate theoretically the balance of the static magnetic and the dynamical photon forces in the electron transport through a quantum dot in a photon cavity with a single photon mode. The quantum dot system is connected to external leads and the total system is exposed to a static perpendicular magnetic field. We explore the transport characteristics through the system by tuning the ratio ℏω_γ/ℏω_c between the photon energy ℏω_γ and the cyclotron energy ℏω_c. Enhancement in the electron transport with increasing electron–photon coupling is observed when ℏω_γ/ℏω_c > 1. In this case the photon field dominates and stretches the electron charge distribution in the quantum dot, extending it towards the contact area for the leads. Suppression in the electron transport is found when ℏω_γ/ℏω_c < 1, as the external magnetic field causes circular confinement of the charge density around the dot.
NASA Astrophysics Data System (ADS)
Liu, Jingyi; Zhang, Wenzhao; Li, Xun; Yan, Weibin; Zhou, Ling
2016-06-01
We investigate the two-photon transport properties inside a one-dimensional waveguide side-coupled to an atom-optomechanical system, aiming to control the two-photon transport by using the nonlinearity. By generalizing the scheme of Phys. Rev. A 90, 033832, we show that the Kerr nonlinearity induced by the four-level atoms is remarkable and can make the photons antibunched, while the nonlinear optomechanical coupling participates in both the single-photon and two-photon processes, so that it can make the two photons exhibit bunching or antibunching.
Rodriguez, Miguel; Sempau, Josep; Brualla, Lorenzo
2015-06-15
Purpose: The Monte Carlo simulation of electron transport in Linac targets using the condensed history technique is known to be problematic, owing to a potential dependence of absorbed dose distributions on the electron step length. In the PENELOPE code, the step length is partially determined by the transport parameters C1 and C2. The authors have investigated the effect of the values given to these parameters in the target on the absorbed dose distribution. Methods: A monoenergetic 6.26 MeV electron pencil beam from a point source was simulated impinging normally on a cylindrical tungsten target. Electrons leaving the tungsten were discarded. Radial absorbed dose profiles were obtained at a depth of 1.5 cm in a water phantom located at 100 cm for values of C1 and C2 in the target both equal to 0.1, 0.01, or 0.001. A detailed simulation case was also considered and taken as the reference. Additionally, lateral dose profiles were estimated and compared with experimental measurements for a 6 MV photon beam of a Varian Clinac 2100 for the cases of C1 and C2 both set to 0.1 or 0.001 in the target. Results: On the central axis, the dose obtained for the case C1 = C2 = 0.1 shows a deviation of (17.2% ± 1.2%) with respect to the detailed simulation. This difference decreases to (3.7% ± 1.2%) for the case C1 = C2 = 0.01. The case C1 = C2 = 0.001 produces a radial dose profile that is equivalent to that of the detailed simulation within the statistical uncertainty reached, 1%. The effect is also appreciable in the crossline dose profiles estimated for the realistic geometry of the Linac. In another simulation, it was shown that the error made by choosing inappropriate transport parameters can be masked by tuning the energy and focal spot size of the initial beam. Conclusions: The use of large path lengths for the condensed simulation of electrons in a Linac target with PENELOPE leads to deviations of the dose in the patient or phantom. Based on the results obtained in
NASA Astrophysics Data System (ADS)
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
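The commissioning idea, adjusting PSL weights so that a weighted sum of pre-computed per-PSL doses matches measurement, is at heart a constrained least-squares fit. A toy sketch using projected gradient descent (the paper uses an augmented Lagrangian with symmetry and smoothness regularization; the dose profiles and weights below are invented):

```python
def commission_weights(psl_doses, measured, n_iter=2000, lr=0.05):
    """Fit nonnegative per-PSL weights so the weighted sum of pre-computed
    PSL dose profiles matches a measured dose profile."""
    n_psl = len(psl_doses)
    w = [1.0] * n_psl
    for _ in range(n_iter):
        # residual of the current weighted sum at each measurement point
        resid = [sum(w[j] * psl_doses[j][i] for j in range(n_psl)) - m
                 for i, m in enumerate(measured)]
        for j in range(n_psl):
            grad = sum(r * psl_doses[j][i] for i, r in enumerate(resid))
            w[j] = max(0.0, w[j] - lr * grad)   # project onto w >= 0
    return w

# toy problem: two PSL dose profiles; the "measurement" is generated
# with known weights (2, 0.5), which the fit should recover
psl_doses = [[1.0, 0.5, 0.2], [0.1, 0.6, 1.0]]
measured = [2 * a + 0.5 * b for a, b in zip(*psl_doses)]
w = commission_weights(psl_doses, measured)
```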
Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J
1998-02-01
A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to a photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events, leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector, determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using CERN's GEANT library. Using the detector's characteristics given by the manufacturer as input, it was impossible to reproduce the experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency had to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
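The final conversion step, dividing the stripped full-absorption spectrum by the full-absorption efficiency, can be sketched as follows; the bin counts, efficiencies, live time, and detector area are hypothetical values, not taken from the paper:

```python
def spectrum_to_flux(stripped_counts, efficiency, live_time, det_area):
    """Convert a stripped full-absorption spectrum (counts per energy bin)
    to photon flux per bin: flux = counts / (efficiency * time * area)."""
    return [c / (eps * live_time * det_area) if eps > 0 else 0.0
            for c, eps in zip(stripped_counts, efficiency)]

# toy example: 3 energy bins with hypothetical full-absorption efficiencies
counts = [1000.0, 400.0, 50.0]
eff = [0.5, 0.2, 0.05]
flux = spectrum_to_flux(counts, eff, live_time=100.0, det_area=10.0)
```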
Ma, C M; Nahum, A E
1993-01-01
This paper presents the dose conversion and wall correction factors for Fricke dosimetry in high-energy photon beams, calculated using both an analytical general cavity model and Monte Carlo techniques. The conversion factor is calculated as the ratio of the absorbed dose in water to that in the Fricke dosimeter solution with a water-walled vessel. The wall correction factor accounts for the change in the absorbed dose to the dosimeter solution caused by the inhomogeneous dosimeter wall material. A user code based on the EGS4 Monte Carlo system, with the application of a correlated sampling variance reduction technique, has been employed in the calculation of these factors and of the parameters used in the cavity model. Good agreement has been achieved between the predictions of the model and those obtained by direct Monte Carlo simulation, and also with other workers' experiments. It is shown that Fricke dosimeters in common use cannot be considered to be 'large' detectors and therefore 'general cavity theory' should be applied in converting the dose to water. It is confirmed that plastic dosimeter vessels have a negligible wall effect. The wall correction factor for a 1 mm thick Pyrex-walled vessel varies with incident photon energy from 1.001 +/- 0.001 for a 60Co beam to 0.983 +/- 0.001 for a 24 MV (TPR20,10 = 0.80) photon beam. This implies that previous Fricke measurements with glass-walled vessels should be re-evaluated. PMID:8426871
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better
Oxygen transport properties estimation by classical trajectory-direct simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-01
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES), obtained by molecular beams scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300-900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S.; Knezevic, I.; Maurer, L. N.; Aksamija, Z.
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
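The elementary step of a stochastic Boltzmann-transport solution of this kind draws a free-flight time from the total scattering rate and then selects which mechanism acted, with probability proportional to its rate. A minimal sketch of that step (the rate values are illustrative, not the paper's; this is a generic scheme, not the authors' implementation):

```python
import math
import random

def sample_step(rates, rng=random.random):
    """One Monte Carlo transport step: exponential free-flight time from the
    total scattering rate, then mechanism selection proportional to rate.
    rates maps mechanism name -> scattering rate (1/s)."""
    total = sum(rates.values())
    t = -math.log(rng()) / total          # exponential free flight
    x, cum = rng() * total, 0.0
    for name, r in rates.items():
        cum += r
        if x <= cum:
            return t, name
    return t, name                        # guard against round-off

# deterministic example: both draws fixed at 0.5
t, mech = sample_step({"edge": 1e9, "three-phonon": 5e8, "isotope": 1e8},
                      rng=lambda: 0.5)
print(mech)   # edge
```

With both uniform draws at 0.5, the flight time is ln(2) divided by the 1.6 GHz total rate, and the cumulative selection lands in the first (edge) channel.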
NASA Astrophysics Data System (ADS)
Pop, Eric; Dutton, Robert W.; Goodson, Kenneth E.
2004-11-01
We describe the implementation of a Monte Carlo model for electron transport in silicon. The model uses analytic, nonparabolic electron energy bands, which are computationally efficient and sufficiently accurate for future low-voltage (<1 V) nanoscale device applications. The electron-lattice scattering is incorporated using an isotropic, analytic phonon-dispersion model, which distinguishes between the optical/acoustic and the longitudinal/transverse phonon branches. We show that this approach avoids introducing unphysical thresholds in the electron distribution function, and that it has further applications in computing detailed phonon generation spectra from Joule heating. A set of deformation potentials for electron-phonon scattering is introduced and shown to yield accurate transport simulations in bulk silicon across a wide range of electric fields and temperatures. The shear deformation potential is empirically determined at Ξu = 6.8 eV, and consequently, the isotropically averaged scattering potentials with longitudinal and transverse acoustic phonons are DLA = 6.39 eV and DTA = 3.01 eV, respectively, in reasonable agreement with previous studies. The room-temperature electron mobility in strained silicon is also computed and shown to be in better agreement with the most recent phonon-limited data available. As a result, we find that electron coupling with g-type phonons is about 40% lower, and the coupling with f-type phonons is almost twice as strong as previously reported.
NASA Astrophysics Data System (ADS)
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm² field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
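A 1D γ evaluation of the kind applied above to depth-dose curves and profiles can be sketched as follows. Global normalization to the reference maximum is an assumption here, and the function and variable names are illustrative:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((dose diff / dd_abs)^2 + (distance / dta)^2).
    dd is the dose-difference criterion as a fraction of the reference
    maximum (global normalization, an assumption); dta is in mm."""
    dd_abs = dd * ref_dose.max()
    gammas = np.empty(len(ref_pos))
    for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
        g2 = ((eval_dose - d) / dd_abs) ** 2 + ((eval_pos - x) / dta) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

x = np.linspace(0, 100, 201)   # positions in mm
d = np.exp(-x / 80.0)          # toy depth-dose curve
g = gamma_1d(x, d, x, d)       # identical profiles -> gamma = 0 everywhere
print(g.max())                 # 0.0
```

A point passes the 2%/2 mm criterion when its gamma value is at most 1; the average gamma of 0.21 reported above indicates comfortable agreement on this scale.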
Single photon transport in two waveguides chirally coupled by a quantum emitter.
Cheng, Mu-Tian; Ma, Xiao-San; Zhang, Jia-Yan; Wang, Bing
2016-08-22
We investigate single photon transport in two waveguides coupled to a two-level quantum emitter (QE). With the deduced analytical scattering amplitudes, we show that under condition of the chiral coupling between the QE and the photon in the two waveguides, the QE can play the role of ideal quantum router to redirect a single photon incident from one waveguide into the other waveguide with a probability of 100% in the ideal condition. The influences of cross coupling between two waveguides and dissipations on the routing are also shown. PMID:27557274
NASA Astrophysics Data System (ADS)
Wang, Brian; Goldstein, Moshe; Xu, X. George; Sahoo, Narayan
2005-03-01
Recently, the theoretical framework of the adjoint Monte Carlo (AMC) method has been developed using a simplified patient geometry. In this study, we extended our previous work by applying the AMC framework to a 3D anatomical model called VIP-Man constructed from the Visible Human images. First, the adjoint fluxes for the prostate (PTV) and rectum and bladder (organs at risk (OARs)) were calculated on a spherical surface of 1 m radius, centred at the centre of gravity of PTV. An importance ratio, defined as the PTV dose divided by the weighted OAR doses, was calculated for each of the available beamlets to select the beam angles. Finally, the detailed doses in PTV and OAR were calculated using a forward Monte Carlo simulation to include the electron transport. The dose information was then used to generate dose volume histograms (DVHs). The Pinnacle treatment planning system was also used to generate DVHs for the 3D plans with beam angles obtained from the AMC (3D-AMC) and a standard six-field conformal radiation therapy plan (3D-CRT). Results show that the DVHs for prostate from 3D-AMC and the standard 3D-CRT are very similar, showing that both methods can deliver prescribed dose to the PTV. A substantial improvement in the DVHs for bladder and rectum was found for the 3D-AMC method in comparison to those obtained from 3D-CRT. However, the 3D-AMC plan is less conformal than the 3D-CRT plan because only bladder, rectum and PTV are considered for calculating the importance ratios. Nevertheless, this study clearly demonstrated the feasibility of the AMC in selecting the beam directions as a part of a treatment planning based on the anatomical information in a 3D and realistic patient anatomy.
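The beam-angle selection step described above ranks candidate beamlets by an importance ratio, PTV dose divided by a weighted sum of organ-at-risk doses. A minimal sketch (array shapes, weights, and names are hypothetical inputs, not the study's data):

```python
import numpy as np

def rank_beamlets(ptv_dose, oar_doses, oar_weights):
    """Rank beamlets by importance ratio = PTV dose / weighted OAR dose.
    ptv_dose: PTV dose per beamlet; oar_doses: one row per beamlet, one
    column per organ at risk; oar_weights: relative OAR weighting.
    Returns beamlet indices sorted best-first, plus the ratios."""
    weighted = oar_doses @ oar_weights
    ratio = ptv_dose / weighted
    return np.argsort(ratio)[::-1], ratio

ptv = np.array([1.0, 2.0, 1.5])
oar = np.array([[0.5, 0.2],    # e.g. bladder, rectum dose per beamlet
                [0.4, 0.6],
                [0.1, 0.1]])
order, r = rank_beamlets(ptv, oar, np.array([1.0, 1.0]))
print(order)   # beamlet 2 ranks first (highest ratio)
```

The highest-ratio beamlets deliver the most target dose per unit of critical-organ dose, which is why a plan built only from this criterion can end up less conformal elsewhere, as the abstract notes.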
NASA Astrophysics Data System (ADS)
Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu
2015-09-01
Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: inability of reflections-free light propagation along sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning new revolutionary concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) gyromagnetic inclusion breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of the full topological bandgap in a bianisotropic, and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.
Fractional transport and photonic sub-diffusion in aperiodic dielectric metamaterials
NASA Astrophysics Data System (ADS)
Dal Negro, Luca; Wang, Yu; Inampudi, Sandeep
Using rigorous transfer matrix theory and full-vector Finite Difference Time Domain (FDTD) simulations in combination with Wavelet Transform Modulus Maxima analysis of multifractal spectra, we demonstrate all-dielectric aperiodic metamaterial structures that exhibit sub-diffusive photon transport properties that are widely tunable across the near-infrared spectral range. The proposed approach leverages the unprecedented spectral scalability offered by aperiodic photonic systems and demonstrates the possibility of achieving logarithmic Sinai sub-diffusion of photons for the first time. In particular we will show that the control of multifractal energy spectra and critical modes in aperiodic metamaterials with nanoscale dielectric components enables tuning of anomalous optical transport from sub- to super-diffusive dynamics, in close analogy with the electron dynamics in quasi-periodic potentials. Fractional diffusion equations models will be introduced for the efficient modeling of photon sub-diffusive processes in metamaterials and applications to diffraction-free propagation in aperiodic media will be provided. The ability to tailor photon transport phenomena in metamaterials with properties originating from aperiodic geometrical correlations can lead to novel functionalities and active devices that rely on anomalous photon sub-diffusion to control beam collimation and non-resonantly enhance light-matter interaction across multiple spectral bands.
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories into tally batches only within a single processor; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with
Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers
NASA Astrophysics Data System (ADS)
Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernández, Rodrigo; Ott, Christian D.
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 1046 erg is deposited within 45° of the symmetry axis over 300 ms when a central BH is present. Similarly, 1.9 × 1048 erg is deposited over 3 s when an HMNS sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
NASA Astrophysics Data System (ADS)
Agueda, N.
2008-04-01
We have developed a Monte Carlo model to simulate the transport of solar near-relativistic (NR; 30-300 keV) electrons along the interplanetary magnetic field (IMF), including adiabatic focusing, pitch-angle dependent scattering, and solar wind effects. By taking into account the angular response of the LEFS60 telescope of the EPAM experiment on board the "Advanced Composition Explorer" spacecraft, we have been able to transform simulated pitch-angle distributions into sectored intensities measured by the telescope. We have developed an algorithm that allows us, for the first time, to infer the best-fit transport conditions and the underlying solar injection profile of NR electrons from the deconvolution of observational sectored intensities. We have studied seven NR electron events observed by the LEFS60 telescope between 1998 and 2004 with the aim of estimating the roles that solar flares and CME-driven shocks play in the acceleration and injection of NR electrons, as well as the conditions of the electron transport along the IMF. In this set of seven NR electron events, we have identified two types of injection episodes in the derived injection profiles: short (< 15 min) and time-extended (> 1 h). The injection profile of three events shows both components; an initial injection episode of short duration, followed by a second much longer lasting episode; two events only show a time-extended injection episode; while the others show an injection profile composed by several short injection episodes. By comparing the timing of the injection with the associated electromagnetic emissions at the Sun, we have concluded that short injection episodes are preferentially associated with the injection of flare-accelerated particles, while longer lasting episodes are provided by CME-driven shocks.
Todo, A S; Hiromoto, G; Turner, J E; Hamm, R N; Wright, H A
1982-12-01
Previous calculations of the initial energies of electrons produced in water irradiated by photons are extended to 1 GeV by including pair and triplet production. Calculations were performed with the Monte Carlo computer code PHOEL-3, which replaces the earlier code, PHOEL-2. Tables of initial electron energies are presented for single interactions of monoenergetic photons at a number of energies from 10 keV to 1 GeV. These tables can be used to compute kerma in water irradiated by photons with arbitrary energy spectra to 1 GeV. In addition, separate tables of Compton- and pair-electron spectra are given over this energy range. The code PHOEL-3 is available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830. PMID:7152948
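Computing kerma from such tables amounts to folding a photon fluence spectrum with the energy times the mass energy-transfer coefficient, K = Σᵢ Φᵢ Eᵢ (μ_tr/ρ)ᵢ. A minimal sketch (the two-bin spectrum and coefficient values below are illustrative, not data from the PHOEL-3 tables):

```python
def kerma(fluence, energies, mu_tr_rho):
    """Kerma from a discretized photon fluence spectrum:
    K = sum_i Phi_i * E_i * (mu_tr/rho)_i.
    Units: fluence in 1/cm^2, E in MeV, mu_tr/rho in cm^2/g -> K in MeV/g."""
    return sum(p * e * m for p, e, m in zip(fluence, energies, mu_tr_rho))

# toy two-bin spectrum (values chosen only to exercise the formula)
K = kerma(fluence=[1e6, 5e5], energies=[0.1, 1.0], mu_tr_rho=[0.025, 0.031])
print(K)   # kerma in MeV per gram
```

In practice, the (μ_tr/ρ) values would come from tabulated coefficients consistent with the electron-spectrum tables the abstract describes.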
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei
2015-06-01
The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where neutron dose shows very strong dependence on the accuracy of accompanying high energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among the codes and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV photons and electrons by using the optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon, (d) 6 MeV and 18 MeV electron LINACs in hospital and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. In the Mg(Ar) chamber, MCNP5 showed lower response than other codes for photon energy region below 0.1 MeV and presented similar response above 0.2 MeV (agreed within 5% in the simple spherical model). With the increase of electron energy, the response difference between MCNP5 and other codes became larger in both chambers. Compared with the measured currents, MCNP5 had the difference from the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINACs beams. But for the Mg(Ar) chamber, the deviations reached 7
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
PADOVANI, ENRICO
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle and the subsequent interactions as close as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0 which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
Enhancing coherent transport in a photonic network using controllable decoherence
NASA Astrophysics Data System (ADS)
Biggerstaff, Devon N.; Heilmann, René; Zecevik, Aidan A.; Gräfe, Markus; Broome, Matthew A.; Fedrizzi, Alessandro; Nolte, Stefan; Szameit, Alexander; White, Andrew G.; Kassal, Ivan
2016-04-01
Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of coherent transport can be enhanced through dynamic interaction between the system and a noisy environment. We report an experimental simulation of environment-assisted coherent transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable-strength decoherence is simulated by broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence.
Vinke, Ruud; Olcott, Peter D.; Cates, Joshua W.; Levin, Craig S.
2014-01-01
In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. Hereby, the timing resolution bound can directly be calculated from Monte Carlo generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed form solutions of arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 × 3 × 20 mm³ LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single photoelectron pulse rise time. PMID:25255807
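The idea of extracting timing statistics directly from Monte Carlo generated photon arrival times can be illustrated with a toy model: for a single-exponential decay with time constant τ, the earliest of n photon arrivals is itself exponential with mean τ/n, so its spread shrinks as 1/n. This first-photon estimator is far simpler than the paper's resolution bound, and the decay time and photon count below are illustrative, not LYSO parameters:

```python
import random
import statistics

def first_photon_sigma(n_events=2000, n_photons=500, tau=40.0, seed=1):
    """Monte Carlo estimate of the standard deviation of the first-photon
    arrival time (in ns) over many scintillation events, for a toy
    single-exponential emission model with decay time tau.
    The minimum of n_photons Exp(tau) variates is Exp(tau/n_photons),
    so the result should approach tau / n_photons."""
    random.seed(seed)
    firsts = [min(random.expovariate(1.0 / tau) for _ in range(n_photons))
              for _ in range(n_events)]
    return statistics.stdev(firsts)

sigma = first_photon_sigma()
print(sigma)   # close to tau / n_photons = 0.08 ns
```

A full treatment would also include photon-transport delays and the photosensor response, which is precisely what the Monte Carlo arrival times in the abstract capture.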
MCNP: Photon benchmark problems
Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.
1991-09-01
The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.
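Agreement "within the statistical uncertainty" can be made concrete as a k-sigma acceptance test on each tally against its analytical or experimental reference. The k = 3 criterion below is an assumed convention for illustration, not an MCNP-prescribed threshold:

```python
def agrees(mc_mean, mc_rel_err, reference, k=3.0):
    """Return True if a Monte Carlo tally agrees with a reference value
    within k standard deviations. mc_rel_err is the relative standard
    error reported alongside the tally; k is the acceptance criterion."""
    sigma = mc_mean * mc_rel_err
    return abs(mc_mean - reference) <= k * sigma

print(agrees(1.02, 0.01, 1.00))   # True: 2% off, ~1% relative error
print(agrees(1.10, 0.01, 1.00))   # False: 10 sigma away
```

Validation of the kind described above applies such a test tally by tally across all six problem families.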
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
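The geometric kernel of quadric tracking, the distance from a particle to its next surface crossing, reduces to a quadratic equation in the path length once the ray is substituted into the implicit quadric. A generic sketch of that computation (not the PENGEOM implementation; the matrix/vector quadric form and names are assumptions):

```python
import numpy as np

def ray_quadric_distance(origin, direction, A, b, c, eps=1e-12):
    """Smallest positive distance t at which the ray origin + t*direction
    crosses the quadric surface x^T A x + b . x + c = 0.
    Returns None if the ray never crosses the surface."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    qa = d @ A @ d                          # quadratic coefficient
    qb = 2.0 * (o @ A @ d) + b @ d          # linear coefficient
    qc = o @ A @ o + b @ o + c              # constant term
    if abs(qa) < eps:                       # degenerate: linear in t
        if abs(qb) < eps:
            return None
        t = -qc / qb
        return t if t > eps else None
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return None
    roots = sorted([(-qb - np.sqrt(disc)) / (2.0 * qa),
                    (-qb + np.sqrt(disc)) / (2.0 * qa)])
    for t in roots:
        if t > eps:
            return t
    return None

# unit sphere x^2 + y^2 + z^2 - 1 = 0, ray from the origin along +x
t = ray_quadric_distance([0, 0, 0], [1, 0, 0], np.eye(3), np.zeros(3), -1.0)
print(t)   # 1.0
```

The "fuzzy" surfaces mentioned in the abstract address exactly the round-off sensitivity visible here when a root lies within eps of zero.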
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
To examine domain decomposition in a coupled Monte Carlo-finite element calculation, it is important to use a decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Chow, J; Owrangi, A
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity for unflattened 6 MV photon beams with that for flattened beams. Monte Carlo simulations (EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a 2 cm thick bone layer at a depth of 1 cm in water was irradiated by unflattened and flattened 6 MV photon beams (field size = 10 × 10 cm²). Phase-space files of the photon beams, based on the Varian TrueBeam linac, were generated by the Geant4 and BEAMnrc codes and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of the unflattened photon beam was about 2.1 times larger than that of the flattened beam in water. Comparing the water phantom with the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantoms became larger than with a beam angle of 0°. The dose ratio of the unflattened to flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region than the flattened beam. Conclusion: Although the unflattened photon beam had a different beam output and quality than the flattened beam, the dose enhancements due to bone scatter were similar. However, we found that the depth dose deviation due to the presence of bone was sensitive to beam obliquity.
NASA Astrophysics Data System (ADS)
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
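The boundary interaction that the augmented model treats rigorously, deciding between specular reflection and refraction at an analytically defined interface, can be sketched as a single Monte Carlo step. This is a generic textbook Fresnel/Snell sampler, not the authors' code; the function name and calling convention are illustrative assumptions.

```python
import random
import numpy as np

def reflect_or_refract(d, n, n1, n2, rng):
    """Given unit photon direction d hitting a surface with unit normal n
    (pointing against d), sample specular reflection vs refraction using
    the unpolarized Fresnel coefficient, as one MC boundary step would."""
    cos_i = -np.dot(d, n)
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 >= 1.0:                         # total internal reflection
        return d + 2.0 * cos_i * n
    cos_t = np.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    R = 0.5 * (rs + rp)                       # unpolarized reflectance
    if rng.random() < R:
        return d + 2.0 * cos_i * n            # reflect
    return (n1 / n2) * d + ((n1 / n2) * cos_i - cos_t) * n  # refract (Snell)

# Normal incidence from air (n=1.0) into tissue-like medium (n=1.5):
rng = random.Random(0)
d_out = reflect_or_refract(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, -1.0]), 1.0, 1.5, rng)
```

The voxel-grid artifact discussed in the abstract arises because, on a serrated boundary, the normal `n` fed to such a step is axis-aligned rather than the true surface normal.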
Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
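The stochastic volume calculation the paper compares against can be illustrated with a hit-or-miss estimator over a toy CSG tree. The tuple encoding and function names below are invented for illustration; a production CSG representation would differ.

```python
import math
import random

# Toy CSG tree: ('sphere', center, r), ('union', a, b),
# ('intersect', a, b), ('diff', a, b).
def inside(node, p):
    kind = node[0]
    if kind == 'sphere':
        _, (cx, cy, cz), r = node
        return (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r
    _, a, b = node
    if kind == 'union':     return inside(a, p) or inside(b, p)
    if kind == 'intersect': return inside(a, p) and inside(b, p)
    if kind == 'diff':      return inside(a, p) and not inside(b, p)
    raise ValueError(kind)

def stochastic_volume(node, bbox, n=200_000, seed=1):
    """Hit-or-miss Monte Carlo volume: sample points uniformly in a bounding
    box and score the fraction landing inside the CSG component."""
    (x0, x1), (y0, y1), (z0, z1) = bbox
    rng = random.Random(seed)
    hits = sum(inside(node, (rng.uniform(x0, x1),
                             rng.uniform(y0, y1),
                             rng.uniform(z0, z1))) for _ in range(n))
    box_vol = (x1-x0) * (y1-y0) * (z1-z0)
    return box_vol * hits / n

ball = ('sphere', (0.0, 0.0, 0.0), 1.0)
v = stochastic_volume(ball, ((-1, 1), (-1, 1), (-1, 1)))  # ~ 4*pi/3
```

The recursive subdivision scheme in the paper improves on this by shrinking the sampled boxes until each piece is either analytic or cheap to sample, which is what gives the guaranteed error bound.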
Comparison of the Angular Dependence of Monte Carlo Particle Transport Modeling Software
NASA Astrophysics Data System (ADS)
Chancellor, Jeff; Guetersloh, Stephen
2011-03-01
Modeling nuclear interactions is relevant to cancer radiotherapy, space mission dosimetry and the use of heavy ion research beams. In heavy ion radiotherapy, fragmentation of the primary ions has the unwanted effect of reducing dose localization, contributing a non-negligible dose outside the volume of tissue being treated. Fragmentation in spaceship walls, hardware and human tissue can lead to large uncertainties in estimates of radiation risk inside the crew habitat. Radiation protection mandates very conservative dose estimation, and reduction of uncertainties is critical to avoid limitations on allowed mission duration and to maximize shielding design. Though fragment production as a function of scattering angle has not been well characterized, simulations with Monte Carlo particle transport models have shown good agreement with data obtained from on-axis detectors with large acceptance angles. However, agreement worsens with decreasing acceptance angle, attributable in part to incorrect transverse momentum assumptions in the models. We will show that there is an unacceptable angular discrepancy in modeling off-axis fragments produced by inelastic nuclear interactions of the primary ion. The results will be compared with published measurements of 400 MeV/nucleon carbon beams interacting in C, CH2, Al, Cu, Sn, and Pb targets.
Two-dimensional dynamical drainage-flow model with Monte Carlo transport and diffusion calculations
Garrett, A.J.; Smith, F.G. III
1982-09-01
A simplified drainage flow model was developed from the equations of motion and the mass continuity equation in a terrain-following coordinate system. The equations were reduced to a two-dimensional system by vertically integrating over the drainage layer. A numerical solution for the drainage layer depth and wind field was obtained using a fourth-order finite difference scheme. A Monte Carlo simulation was used to calculate the transport and diffusion of tracer gases. Model simulations of drainage flow have been compared with observations from the 1980 Geysers area experiments. The Geysers area is mountainous, with steep slopes, some of which are steeper than 10°. The model predictions of wind direction are good, but wind speeds are not predicted as accurately. Simulations of perfluorocarbon tracer concentrations were in good agreement with observed values. The maximum tracer concentration was predicted to within a factor of five. While the predicted plume arrival was somewhat early, the model closely predicted the duration of the passage for plume concentrations greater than 0.5 ppt. The two-dimensional model was found to work equally well in simulating drainage flows over the Savannah River Plant (SRP) and surrounding terrain, with slopes of around 1°. The model correctly predicted that drainage winds at SRP are usually shallower than 60 m, which is the height at which meteorological towers measure the winds in the SRP production areas. The modest computational requirements of the model make it suitable for use in screening potential industrial sites.
Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind
NASA Astrophysics Data System (ADS)
Barakat, A. R.; Ji, J.; Schunk, R. W.
2013-12-01
In many space plasma problems (e.g. the terrestrial polar wind, the solar wind, etc.), the plasma gradually evolves from dense, collision-dominated conditions into rarefied, collisionless conditions. For decades, numerous attempts have been made to address this type of problem using simulations based on one of two approaches: (1) the (fluid-like) Generalized Transport Equations (GTE), and (2) particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC approach, the GTE approach can be considerably more efficient, but its validity is questionable outside the collision-dominated region, depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to this 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography, such as diffuse optical tomography. To this end, this paper constructs a space-time coupling model of the radiative transport equation (RTE) with the photon diffusion equation. In the coupling model, the space-time regime of photon migration is divided into ballistic and diffusive regimes, with interaction between the two regimes, to improve the accuracy of the results and the efficiency of the computation. The coupling model provides an accurate description of photon migration in various turbid media over a wide range of optical properties, and reduces the computational load compared with a full calculation of the RTE.
Chow, J; Grigor, G
2014-08-15
This study investigated the dosimetric impact of bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations using an EGSnrc-based code were used to calculate depth doses and photon fluence spectra. An inhomogeneous bone phantom containing a thin water layer (1–3 mm) on top of a 1 cm bone layer, mimicking the treatment sites of the forehead, chest wall and kneecap, was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. The percentage depth doses showed that the maximum bone dose was about 210–230% larger than the surface dose in the phantoms with different water thicknesses. The surface dose was found to increase from 2.3% to 3.5% when the distance between the phantom surface and the bone was increased from 1 to 3 mm. This increase of surface dose on top of a bone was due to the increase of photon fluence intensity resulting from bone backscatter in the energy range of 30–120 keV as the water thickness was increased. This was also supported by the increase of the intensity of the photon energy spectral curves at the phantom and bone surfaces as the water thickness was increased. It is concluded that if the bone inhomogeneity is not considered during dose prescription in the sites of the forehead, chest wall and kneecap with soft tissue thickness of 1–3 mm, there would be an uncertainty in the dose delivery.
Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)
Kerisit, Sebastien N.; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of
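The small-polaron hopping transport described above is naturally simulated with a residence-time (BKL) kinetic Monte Carlo loop. Below is a deliberately minimal 1-D, single-rate sketch, with uniform hop rates and no defects, unlike the full hematite lattice model; all names are illustrative.

```python
import math
import random

def kmc_hop_1d(n_steps, rate, a=1.0, seed=42):
    """Residence-time (BKL) kinetic Monte Carlo for a single small polaron
    hopping between nearest-neighbour sites of a 1-D lattice with a single
    thermally activated rate per direction; returns the trajectory."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    xs, ts = [x], [t]
    for _ in range(n_steps):
        k_tot = 2.0 * rate                     # left + right hop channels
        t += -math.log(rng.random()) / k_tot   # exponential waiting time
        x += a if rng.random() < 0.5 else -a   # pick a hop channel
        xs.append(x)
        ts.append(t)
    return xs, ts

xs, ts = kmc_hop_1d(n_steps=100, rate=1.0)
# In 1-D with this scheme, <x^2> grows as 2*D*t with D = rate * a^2
# (the Einstein relation the conductivity comparison relies on).
```

The paper's model extends this pattern to the anisotropic hematite lattice, with defect-dependent rates entering `k_tot` site by site.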
NASA Astrophysics Data System (ADS)
Sakota, Daisuke; Takatani, Setsuo
2012-05-01
Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased; therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can reproduce the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best reproduced the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide the optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.
NASA Astrophysics Data System (ADS)
Cullum, Ian Derek
Single photon emission computed tomography offers the potential for quantification of the uptake of radiopharmaceuticals in vivo. This thesis investigates some of the factors which limit the accuracy of these methods for measurements in the human brain and investigates how the errors can be reduced. Modifications to data collection devices, rather than image reconstruction techniques, are studied. To assess the impact of errors on images, a set of computer-generated test objects was developed. These included standard Anger and Phelps phantoms and a series of slices of the human brain taken from an atlas of transmission tomography. System design involves a balance between resolution and noise in the image. The optimal resolution depends on the data collection system, the uptake characteristics of the radiopharmaceutical and the object size. A method to determine this resolution was developed and showed a single-slice system employing focused probe detectors to offer greater potential for quantification in the brain than systems based on multiple Anger gamma cameras. A collimation system must be designed to achieve the required resolution. Classical geometric design is not satisfactory in the presence of scattering materials. For this reason a Monte Carlo simulation allowing flexible choice of collimator parameters and source distribution was developed. The simulation was fully tested and then used to predict the performance of collimators for probe- and camera-based systems. These assessments were carried out for the 'worst case source', a concept developed and validated to allow faster prediction of collimator performance. In essence, the geometry of this source is such as to allow a resolution measurement to be made which represents the worst value expected from the system. The effect of changes in collimation on image quality was assessed using the computer phantoms and simulation of the data acquisition process on the single-slice system. These data were
NASA Astrophysics Data System (ADS)
Almansa, Julio F.; Guerrero, Rafael; Al-Dweri, Feras M. O.; Anguiano, Marta; Lallena, Antonio M.
2007-05-01
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10 keV and 2 MeV. A comparison of the results obtained using the two codes with the available data calculated with other Monte Carlo codes is carried out. A χ²-like statistical test is proposed for these comparisons. PENELOPE and GEANT4 show reasonable agreement for all energies analyzed and distances to the source larger than 1 cm. Significant differences are found at distances from the source of up to 1 cm. A similar situation occurs between PENELOPE and EGS4.
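The abstract does not give the exact form of its χ²-like test. A common variant for comparing two sets of Monte Carlo estimates with independent statistical uncertainties sums the squared differences weighted by the combined variances; the helper below is a hypothetical analogue of that idea, not the authors' statistic, and all numbers are invented.

```python
def chi2_like(a, sa, b, sb):
    """Chi-square-like comparison of two Monte Carlo data sets a and b with
    1-sigma statistical uncertainties sa and sb (hypothetical analogue of
    the test proposed in the abstract, not the authors' exact statistic)."""
    chi2 = sum((ai - bi) ** 2 / (sai ** 2 + sbi ** 2)
               for ai, sai, bi, sbi in zip(a, sa, b, sb))
    ndof = len(a)
    return chi2, chi2 / ndof   # reduced chi-square near 1 => compatible

# Invented dose-rate values at three radial distances, with MC uncertainties:
a  = [1.00, 0.52, 0.27];  sa = [0.01, 0.008, 0.006]
b  = [1.01, 0.51, 0.27];  sb = [0.01, 0.008, 0.006]
chi2, red = chi2_like(a, sa, b, sb)
```

A reduced value well above 1 flags distances (such as the sub-centimetre region in the abstract) where the two codes disagree beyond their statistical noise.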
Chow, James C.L.; Owrangi, Amir M.
2012-07-01
The dependence of mucosal dose in the oral or nasal cavity on beam energy, beam angle, multibeam configuration, and mucosal thickness was studied for small photon fields using Monte Carlo simulations (EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm), with and without bone and air inhomogeneities, were irradiated by 6- and 18-MV photon beams (field size = 1 × 1 cm²) with gantry angles of 0°, 90°, and 180°, and with multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1 to 3 mm when the beam angle was 0°. The variation of mucosal surface dose with thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (28% for the 6-MV photon beam and 20% for the 18-MV beam) with mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on thickness became insignificant when the number of photon beams around the mucosal tissue was increased. The mucosal dose with bone varied with beam energy, beam angle, multibeam configuration and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider in improving the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized.
ITS Version 4.0: Electron/photon Monte Carlo transport codes
Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.
1995-07-01
The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.
Photon transport in a one-dimensional nanophotonic waveguide QED system
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Zeng, Xiaodong; Nha, Hyunchul; Zubairy, M. Suhail
2016-06-01
The waveguide quantum electrodynamics (QED) system may have important applications in quantum devices and quantum information technology. In this article we review the methods proposed to calculate photon transport in a one-dimensional (1D) waveguide coupled to quantum emitters. We first introduce the Bethe ansatz approach and the input–output formalism to calculate the stationary results for single-photon transport. Then we present a dynamical time-dependent theory to calculate the real-time evolution of the waveguide QED system. In the long-time limit, both the stationary theory and the dynamical calculation give the same results. Finally, we also briefly discuss calculations for multiphoton transport problems.
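For the simplest case such reviews cover, a single lossless two-level emitter coupled to a 1D waveguide, the stationary Bethe-ansatz/input-output result for the single-photon transmission amplitude is t(δ) = δ/(δ + iΓ/2), with δ the detuning from the emitter and Γ the decay rate into the waveguide. A short numerical sketch (the function name is illustrative):

```python
import numpy as np

def transmission(delta, gamma):
    """Single-photon transmission amplitude past a single lossless two-level
    emitter coupled to a 1-D waveguide: t = delta / (delta + i*gamma/2)."""
    return delta / (delta + 1j * gamma / 2.0)

gamma = 1.0
deltas = np.linspace(-5.0, 5.0, 11)            # detunings in units of gamma
T = np.abs(transmission(deltas, gamma)) ** 2   # transmission probability
# On resonance (delta = 0) the photon is fully reflected: T = 0.
```

The dynamical theory mentioned in the abstract reproduces this Lorentzian dip as the long-time limit of the real-time evolution.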
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
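The feature set the MCB exercises (creation, tracking, tallying, destruction) can be mimicked in a few lines with a serial 1-D slab toy problem. The MPI particle trading is omitted, and all names and parameters here are illustrative, not part of the MCB itself.

```python
import math
import random

def mc_slab_transport(n_particles, sigma_t, absorb_prob, thickness, seed=7):
    """Minimal MCB-style kernel: create particles, track them through a 1-D
    slab with exponential free flights, tally leakage and absorption, then
    destroy them."""
    rng = random.Random(seed)
    tally = {"leaked": 0, "absorbed": 0}
    for _ in range(n_particles):                  # particle creation
        x, direction = 0.0, 1.0
        while True:                               # particle tracking
            x += direction * (-math.log(rng.random()) / sigma_t)
            if x < 0.0 or x > thickness:
                tally["leaked"] += 1              # tally, then destroy
                break
            if rng.random() < absorb_prob:
                tally["absorbed"] += 1
                break
            direction = rng.choice((-1.0, 1.0))   # isotropic 1-D scatter

    return tally

tally = mc_slab_transport(10_000, sigma_t=1.0, absorb_prob=0.3, thickness=2.0)
```

In the parallel benchmark, the inner loop is what each rank runs between MPI exchanges of in-flight particles.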
Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow us to achieve better consistency between the nucleus-nucleus section and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model, based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis, and to deploy a user-friendly GUI input interface.
Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R
2014-04-21
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of
Monte-Carlo Impurity transport simulations in the edge of the DIII-D tokamak using the MCI code
Evans, T.E.; Mahdavi, M.A.; Sager, G.T.; West, W.P.; Fenstermacher, M.E.; Meyer, W.H.; Porter, G.D.
1995-07-01
A Monte-Carlo Impurity (MCI) transport code is used to follow trace impurities through multiple ionization states in realistic 2-D tokamak geometries. The MCI code is used to study impurity transport along the open magnetic field lines of the Scrape-off Layer (SOL) and to understand how impurities get into the core from the SOL. An MCI study concentrating on the entrainment of carbon impurity ions by the deuterium background plasma into the DIII-D divertor is discussed. MCI simulation results are compared with experimental DIII-D carbon measurements.
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
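The contrast the paper draws, brute-force Monte Carlo sampling of the uncertain input versus a non-intrusive stochastic collocation rule, can be seen on a one-variable toy problem. The quantity of interest `f` and all numbers below are invented for illustration; neither is the paper's physical model.

```python
import math
import numpy as np

def f(xi):
    """Toy result quantity driven by one standard-normal uncertain input."""
    return math.exp(0.3 * xi)

# Brute-force Monte Carlo propagation of the input uncertainty.
rng = np.random.default_rng(5)
samples = [f(x) for x in rng.standard_normal(100_000)]
mc_mean = float(np.mean(samples))

# Non-intrusive stochastic collocation: Gauss-Hermite quadrature in xi
# (probabilists' weight exp(-x^2/2); the weights sum to sqrt(2*pi)).
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
sc_mean = sum(w * f(x) for x, w in zip(nodes, weights)) / math.sqrt(2 * math.pi)

# Exact lognormal mean for comparison: exp(0.3**2 / 2)
```

For smooth response surfaces like this one, the 8-node collocation rule matches the exact mean far more cheaply than the 100,000-sample Monte Carlo run, which is the efficiency trade-off the paper quantifies; the intrusive SGM instead expands `f` itself in a stochastic spectral basis.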
NASA Astrophysics Data System (ADS)
Ding, George X.
2002-04-01
The purpose of this study is to provide detailed characteristics of incident photon beams for different field sizes and beam energies. This information is critical to the future development of accurate treatment planning systems. It also enhances our knowledge of radiotherapy photon beams. The EGS4 Monte Carlo code, BEAM, has been used to simulate 6 and 18 MV photon beams from a Varian Clinac-2100EX accelerator. A simulated realistic beam is stored in a phase space data file, which contains details of each particle's complete history, including where it has been and where it has interacted. The phase space files are analysed to obtain energy spectra, angular distributions, fluence profiles and mean energy profiles at the phantom surface for particles separated according to their charge and history. The accuracy of a simulated beam is validated by the excellent agreement between the Monte Carlo calculated and measured dose distributions. Measured depth-dose curves are obtained from depth-ionization curves by accounting for newly introduced chamber fluence corrections and the stopping-power ratios for realistic beams. The study presents calculated depth-dose components from different particles as well as the calculated surface dose and the contribution of different particles to the surface dose across the field. It is shown that the increase of surface dose with field size is mainly due to the increase of incident contaminant charged particles. At 6 MV, the incident charged particles contribute 7% to 21% of the maximum dose at the surface when the field size increases from 10 × 10 to 40 × 40 cm². At 18 MV, their contributions are up to 11% and 29% of the maximum dose at the surface for 10 × 10 cm² and 40 × 40 cm² fields, respectively. However, the fluence of these incident charged particles is less than 1% of the incident photon fluence in all cases.
Walsh, J. A.; Palmer, T. S.; Urbatsch, T. J.
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V.
2007-12-15
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and
NASA Astrophysics Data System (ADS)
Chatterjee, Arka; Chakrabarti, Sandip Kumar; Ghosh, Himadri
2016-07-01
In a black hole accretion process, Comptonization of photons is a very important phenomenon. Photons generated from the Keplerian disk are intercepted by the hot thick disk, which represents the CENtrifugal pressure supported BOundary Layer, or CENBOL. First, we construct generalized relativistic thick disks for different thermodynamical parameters. Inside the thick disk, we compute Comptonization, including the trajectory correction for photons due to the space-time geometry of the black hole. Spectral differences between null-geodesic Comptonization and flat space-time Comptonization are studied for different scenarios. In the next step, we connect the Comptonized photons to the observer to obtain energy-dependent images of the system. We vary the inclination angle of the observer to study the spectral differences of the emergent photons. We provide a catalogue of energy-dependent images and spectra for future observations.
Griesheimer, D. P.; Stedry, M. H.
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement such an algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
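The remapping-vector idea described for WARP can be illustrated with a minimal host-side sketch: instead of moving particle data, an index vector is sorted by reaction type so that threads processing the same reaction touch contiguous entries. Python's stable sort stands in for the GPU radix sort, and the reaction codes below are hypothetical:

```python
def remap_by_reaction(reactions):
    """Build a remapping vector of particle indices sorted by reaction
    type, so that particles needing the same physics routine end up
    contiguous.  The particle data itself is never moved; only this
    small index vector is rearranged each transport iteration.
    """
    return sorted(range(len(reactions)), key=lambda i: reactions[i])

# Hypothetical reaction codes: 0 = scatter, 1 = absorption, 2 = fission.
reactions = [2, 0, 1, 0, 2]
remap = remap_by_reaction(reactions)
grouped = [reactions[i] for i in remap]
# grouped == [0, 0, 1, 2, 2]: identical reactions are now contiguous
```

On the GPU, each kernel then walks one contiguous slice of the remapping vector, which restores coherent memory access without shuffling the full particle arrays.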
Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas
Hoeoek, J.; Hellsten, T.
2009-11-26
Essential for modeling heating and transport of a fusion plasma is determining the distribution function of the plasma species. Characteristic of RF-heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to a Maxwellian due to the velocity dependence of the collision frequency. Because of the geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has been proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
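A minimal sketch of the MLMC telescoping estimator referred to above, using a coupled Euler discretization of a scalar SDE (a finance-style toy problem, not the streamline transport solver of the study) as the level hierarchy:

```python
import random

def mlmc(sampler, levels, samples_per_level, seed=1):
    """Multilevel Monte Carlo estimator based on the telescoping sum
    E[P_L] = E[P_0] + sum_{l>=1} E[P_l - P_{l-1}].
    `sampler(level, rng)` returns a coupled (fine, coarse) pair computed
    from the same random input; coarse is None at level 0.
    """
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            fine, coarse = sampler(level, rng)
            acc += fine if coarse is None else fine - coarse
        total += acc / n
    return total

def gbm_sampler(level, rng, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Coupled Euler paths of dS = r S dt + sigma S dW on 2**level steps
    (fine) and 2**(level-1) steps (coarse), driven by the same Brownian
    increments -- a toy stand-in for the transport solve on nested grids.
    """
    n = 2 ** level
    dt = T / n
    dW = [rng.gauss(0.0, dt ** 0.5) for _ in range(n)]
    s_fine = s0
    for w in dW:
        s_fine += r * s_fine * dt + sigma * s_fine * w
    if level == 0:
        return s_fine, None
    s_coarse = s0
    for i in range(0, n, 2):
        s_coarse += r * s_coarse * (2 * dt) + sigma * s_coarse * (dW[i] + dW[i + 1])
    return s_fine, s_coarse

estimate = mlmc(gbm_sampler, levels=[0, 1, 2, 3, 4],
                samples_per_level=[4000, 2000, 1000, 500, 250])
# estimate is close to the exact mean s0 * exp(r * T) ~ 1.051
```

Because the level-l corrections have small variance, most samples are spent on the cheap coarse level, which is where the speedup over plain MC comes from.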
NASA Astrophysics Data System (ADS)
Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.
2016-03-01
To improve the quality of life of cancer patients, radiation techniques are constantly evolving. Two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are particularly promising. They comprise many small beams (beamlets) of varying intensity, designed to deliver the intended radiation dose to the tumor with minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The Monte Carlo simulation was calibrated using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm2 field at 100 cm SSD. The calculated and measured values are consistent, differing by no more than 1%. The output factors obtained with the microDiamond detector were compared with those from the SFD and the Monte Carlo simulation, and the percentage differences are less than 2%.
Quantum transport of strongly interacting photons in a one-dimensional nonlinear waveguide
NASA Astrophysics Data System (ADS)
Hafezi, Mohammad; Chang, Darrick E.; Gritsev, Vladimir; Demler, Eugene; Lukin, Mikhail D.
2012-01-01
We present a theoretical technique for solving the quantum transport problem of a few photons through a one-dimensional, strongly nonlinear waveguide. We specifically consider the situation where the evolution of the optical field is governed by the quantum nonlinear Schrödinger equation. Although this kind of nonlinearity is quite general, we focus on a realistic implementation involving cold atoms loaded in a hollow-core optical fiber, where the atomic system provides a tunable nonlinearity that can be large even at a single-photon level. In particular, we show that when the interaction between photons is effectively repulsive, the transmission of multiphoton components of the field is suppressed. This leads to antibunching of the transmitted light and indicates that the system acts as a single-photon switch. On the other hand, in the case of attractive interaction, the system can exhibit either antibunching or bunching, which is in stark contrast to semiclassical calculations. We show that the bunching behavior is related to the resonant excitation of bound states of photons inside the system.
Controllable single-photon transport between remote coupled-cavity arrays
NASA Astrophysics Data System (ADS)
Qin, Wei; Nori, Franco
2016-03-01
We develop an approach for controllable single-photon transport between two remote one-dimensional coupled-cavity arrays, used as quantum registers, mediated by an additional one-dimensional coupled-cavity array, acting as a quantum channel. A single two-level atom located inside one cavity of the intermediate channel is used to control the long-range coherent quantum coupling between two remote registers, thereby functioning as a quantum switch. With a time-independent perturbative treatment, we find that the leakage of quantum information can in principle be made arbitrarily small. Furthermore, our method can be extended to realize a quantum router in multiregister quantum networks, where single photons can be either stored in one of the registers or transported to another on demand. These results are confirmed by numerical simulations.
NASA Astrophysics Data System (ADS)
Martín, F.; García-García, J.; Oriols, X.; Suñé, J.
1999-02-01
A coupling model between a classical Monte Carlo simulator and a Liouville equation solver is proposed for the simulation of vertical-transport quantum devices in which extensive regions of the simulation domain behave classically. These devices can be partitioned into regions in which either a classical (Monte Carlo) or a quantum (Wigner formalism) treatment of carrier transport is required, making a coupling scheme between adjacent regions necessary. To this end, the boundary conditions inferred from the Monte Carlo solver for the integration of the Liouville equation in the quantum regions, as well as the injection scheme for the Monte Carlo regions provided by the Wigner distribution function at the boundaries, have been established. The results of this work, using a resonant tunneling diode as a reference device, show that the proposed technique is promising for the simulation of electron transport in quantum devices.
Lian, C P L; Othman, M A R; Cutajar, D; Butson, M; Guatelli, S; Rosenfeld, A B
2011-06-01
Skin dose is often the quantity of interest for radiological protection, as the skin is the organ that receives maximum dose during kilovoltage X-ray irradiations. The purpose of this study was to simulate the energy response and the depth dose water equivalence of the MOSkin radiation detector (Centre for Medical Radiation Physics (CMRP), University of Wollongong, Australia), a MOSFET-based radiation sensor with a novel packaging design, at clinical kilovoltage photon energies typically used for superficial/orthovoltage therapy and X-ray CT imaging. Monte Carlo simulations by means of the Geant4 toolkit were employed to investigate the energy response of the CMRP MOSkin dosimeter on the surface of the phantom, and at various depths ranging from 0 to 6 cm in a 30 × 30 × 20 cm3 water phantom. By varying the thickness of the tissue-equivalent packaging, and by adding thin metallic foils to the existing design, the dose enhancement effect of the MOSkin dosimeter at low photon energies was successfully quantified. For a 5 mm diameter photon source, it was found that the MOSkin was water equivalent to within 3% at shallow depths less than 15 mm. It is recommended that for depths larger than 15 mm, the appropriate depth dose water equivalent correction factors be applied to the MOSkin at the relevant depths if this detector is to be used for depth dose assessments. This study has shown that the Geant4 Monte Carlo toolkit is useful for characterising the surface energy response and depth dose behaviour of the MOSkin. PMID:21559885
NASA Astrophysics Data System (ADS)
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
NASA Astrophysics Data System (ADS)
Bobik, P.; Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; La Vacca, G.; Pensotti, S.; Putis, M.; Rancoita, P. G.; Rozza, D.; Tacconi, M.; Zannoni, M.
2016-05-01
The propagation of cosmic rays inside the heliosphere is well described by a transport equation introduced by Parker in 1965. Several approaches have been followed in the past to solve this equation. Recently, a Monte Carlo approach has become widely used because of its advantages with respect to other numerical methods. In this approach the transport equation is associated with a fully equivalent set of stochastic differential equations (SDEs). This set is used to describe the stochastic path of a quasi-particle from a source, e.g., the interstellar space, to a specific target, e.g., a detector at Earth. We present a comparison of forward-in-time and backward-in-time methods for solving the cosmic-ray transport equation in the heliosphere. The Parker equation and the related sets of SDEs in their several formulations are treated in this paper. For the sake of clarity, this work is focused on the one-dimensional solutions. Results were compared with an alternative numerical solution, namely the Crank-Nicolson method, specifically developed for the case under study. The methods presented are fully consistent with each other for energies greater than 400 MeV. The comparison between the stochastic integrations and the Crank-Nicolson solution allows us to estimate the systematic uncertainties of the Monte Carlo methods. The forward-in-time stochastic integration method showed a systematic uncertainty <5%, while the backward-in-time method showed a systematic uncertainty <1% in the studied energy range.
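The forward-in-time stochastic integration described above can be sketched with an Euler-Maruyama scheme for a generic 1-D advection-diffusion SDE; the drift and diffusion coefficients below are toy placeholders, not the actual heliospheric modulation parameters:

```python
import random

def forward_sde(x0, drift, diffusion, dt, steps, n_paths, seed=7):
    """Forward-in-time Euler-Maruyama integration of
        dX = drift(X) dt + sqrt(2 * diffusion(X)) dW,
    the stochastic counterpart of a 1-D Fokker-Planck/Parker-type
    transport equation.  Returns the ensemble mean of X at the final
    time, as a stand-in for propagating quasi-particles from source
    to target.
    """
    rng = random.Random(seed)
    final = []
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            x += drift(x) * dt + (2.0 * diffusion(x) * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        final.append(x)
    return sum(final) / n_paths

# Constant drift v = 1 and diffusion K = 0.5 (toy values): E[X(T)] = x0 + v*T.
mean_x = forward_sde(0.0, lambda x: 1.0, lambda x: 0.5,
                     dt=0.01, steps=100, n_paths=2000)
# ensemble mean is close to x0 + v*T = 1.0
```

The backward-in-time variant integrates the same kind of SDE from the detector toward the source, which concentrates samples on the phase-space region that actually contributes to the measurement.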
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively-parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation proceeds.
Use of single scatter electron Monte Carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% in the control rod reactivity, and 1% in the sodium void reactivity.
Schaefer, C.; Jansen, A. P. J.
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
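The mass-balance correction described above can be illustrated as a constrained least-squares projection: raw kinetic Monte Carlo rates are minimally adjusted so that a stoichiometric conservation constraint holds exactly. The constraint form S r = 0 is an illustrative guess at the paper's scheme, not its exact formulation:

```python
import numpy as np

def mass_balance_correction(S, r):
    """Least-squares mass-balance correction of raw kinetic Monte Carlo
    rates r: find the smallest adjustment such that the conservation
    constraint S @ r_corrected = 0 holds exactly.
    Closed form of the constrained least-squares problem:
        r' = r - S^T (S S^T)^{-1} (S r),
    which reduces to a small non-singular linear solve (S must have
    full row rank).
    """
    S = np.asarray(S, dtype=float)
    r = np.asarray(r, dtype=float)
    return r - S.T @ np.linalg.solve(S @ S.T, S @ r)

# One conservation law tying two rates together: S = [1, -1] demands
# that rate 0 equal rate 1, so the noisy inputs are pulled to their mean.
r_fixed = mass_balance_correction([[1.0, -1.0]], [2.0, 1.8])
# r_fixed == [1.9, 1.9]
```

For a real surface reaction network, each row of S would encode one element or site balance derived from the stoichiometry matrix.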
Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad
2015-02-24
Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissue beyond the tumor; photon beams, on the other hand, are used to treat deep targets. Both beam types have limitations, for example the dependence of the penumbra on depth and the lack of lateral equilibrium for small electron-beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the simulated Percent Depth Dose (PDD) and profiles against measurement. Next, a perforated lead (Pb) sheet of 1 mm thickness was placed on top of the applicator holder tray. This layer produces bremsstrahlung X-rays while part of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. The study showed the advantages of mixing electron and photon beams: the depth dependence of the pure electron penumbra is reduced, especially for small fields, and the strong variation of the PDD curve with irradiation field size is decreased.
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin
2015-01-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (∼2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ∼0.2 eV but also increases the majority carrier density and mobility, enhancing electron–hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. [Slide keywords: main physics in the radiation transport codes HZETRN, UPROP, FLUKA, GEANT4; slab geometry; SPE; GCR.]
NASA Astrophysics Data System (ADS)
Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
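The role of the static structure factor in coherent scattering can be sketched with a rejection sampler: the single-scatterer differential cross section is reweighted by S(q) at momentum transfer q = 2k sin(theta/2), which alters the anisotropy of scattering in a structured medium. The Percus-Yevick-like S(q) below is a toy placeholder, not a fitted liquid model:

```python
import math
import random

def sample_coherent_angle(k, structure_factor, rng, sigma=lambda c: 1.0):
    """Rejection-sample a scattering angle whose differential cross
    section is the single-scatterer value sigma(cos theta) weighted by
    the static structure factor S(q), q = 2 k sin(theta / 2).
    """
    # Envelope constant: max of sigma * S over a coarse angular grid.
    grid = [math.pi * i / 64 for i in range(65)]
    m = max(sigma(math.cos(t)) * structure_factor(2 * k * math.sin(t / 2))
            for t in grid)
    while True:
        theta = math.acos(rng.uniform(-1.0, 1.0))  # isotropic proposal
        q = 2 * k * math.sin(theta / 2)
        if rng.random() * m <= sigma(math.cos(theta)) * structure_factor(q):
            return theta

# Toy structure factor: coherent suppression of small-q (forward) scattering.
S = lambda q: q * q / (1.0 + q * q)
rng = random.Random(3)
angles = [sample_coherent_angle(1.0, S, rng) for _ in range(2000)]
mean_cos = sum(math.cos(t) for t in angles) / len(angles)
# mean_cos is negative: the structured medium pushes scattering backward
```

With S(q) = 1 the sampler reduces to the dilute-gas case, which is exactly the limit the generalized model must recover.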
An evaluation of the difference formulation for photon transport in a two level system
Daffin, Frank (daffin1@llnl.gov); McKinley, Michael Scott (mckinley9@llnl.gov); Brooks, Eugene D. (brooks3@llnl.gov); Szoeke, Abraham (szoke1@llnl.gov)
2005-03-20
In this paper, we extend the difference formulation for radiation transport to the case of a single atomic line. We examine the accuracy, performance and stability of the difference formulation within the framework of the Symbolic Implicit Monte Carlo method. The difference formulation, introduced for thermal radiation by some of the authors, has the unique property that the transport equation is written in terms that become small for thick systems. We find that the difference formulation has a significant advantage over the standard formulation for a thick system. The correct treatment of the line profile, however, requires that the difference formulation in the core of the line be mixed with the standard formulation in the wings, and this may limit the advantage of the method. We bypass this problem by using the gray approximation. We develop three Monte Carlo solution methods based on different degrees of implicitness for the treatment of the source terms, and we find only conditional stability unless the source terms are treated fully implicitly.
NASA Astrophysics Data System (ADS)
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-07-01
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
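The Metropolis-like acceptance criterion mentioned above can be illustrated generically. The energy scale, the uniform move proposal, and the kT value below are arbitrary illustrative choices; this is the standard Metropolis rule, not the authors' NDDO-based model:

```python
import math
import random

def metropolis_accept(delta_e, kT, rng):
    """Standard Metropolis criterion: downhill moves are always
    accepted; uphill moves with probability exp(-delta_e / kT)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / kT)

# Pseudo-time sweep: the number of accepted Monte Carlo moves plays
# the role the paper assigns to its pseudo-time axis.
rng = random.Random(7)
N = 100000
accepted = sum(metropolis_accept(rng.uniform(-0.1, 0.3), 0.025, rng)
               for _ in range(N))
acceptance_rate = accepted / N
```

In the paper, the source-gate voltage enters through the energy difference used in such a criterion; here that dependence is folded into the hypothetical `delta_e` distribution.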
Cascaded two-photon spectroscopy of Yb atoms with a transportable effusive atomic beam apparatus.
Song, Minsoo; Yoon, Tai Hyun
2013-02-01
We present a transportable effusive atomic beam apparatus for cascaded two-photon spectroscopy of the dipole-forbidden transition (6s(2)(1)S0 ↔ 6s7s(1)S0) of Yb atoms. An ohmically heated effusive oven is designed with a reservoir volume of 1.6 cm(3) and a highly collimated atomic beam (collimation angle of 30 mrad). The new atomic beam apparatus allows us to detect the spontaneous cascaded two-photon emission from the 6s7s(1)S0 state via the intercombination 6s6p(3)P1 state with a high signal-to-noise ratio even at a temperature of 340 °C. This is made possible by the enhanced atomic beam flux and superior detection solid angle of our apparatus. PMID:23464193
A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
Gentile, N A; Trahan, T J
2011-03-22
We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.
Anigstein, Robert; Erdman, Michael C; Ansari, Armin
2016-06-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of Co, Cs, and Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
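The weighted root-mean-square deviation this record defines parenthetically can be written out as a short function. The weighting shown is one plausible reading of that definition (inverse-variance weights on relative differences, combined in quadrature), not the paper's verified formula, and the count rates below are hypothetical:

```python
import math

def weighted_rms_deviation(measured, simulated, sigma):
    """Weighted root-mean-square relative deviation, in percent:
    relative differences combined in quadrature, each weighted by the
    inverse variance of the difference.  One plausible reading of the
    paper's definition, not its verified formula."""
    w = [1.0 / (s * s) for s in sigma]
    d = [(m, c) for m, c in zip(measured, simulated)]
    rel = [(m - c) / m for m, c in d]
    mean_sq = sum(wi * ri * ri for wi, ri in zip(w, rel)) / sum(w)
    return 100.0 * math.sqrt(mean_sq)

# Hypothetical count rates (counts/s) and uncertainties:
rms = weighted_rms_deviation([100.0, 200.0, 400.0],
                             [95.0, 210.0, 398.0],
                             [5.0, 10.0, 10.0])
```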
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
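The remapping-vector idea described above can be sketched on the CPU. This is an illustration of the data layout, not WARP code: Python's stable sort stands in for the GPU parallel radix sort, and the records and reaction names are invented:

```python
# Toy particle records; only the remap vector is reordered, never the
# (much larger) particle data itself.
particles = [
    {"id": 0, "reaction": "scatter"},
    {"id": 1, "reaction": "fission"},
    {"id": 2, "reaction": "capture"},
    {"id": 3, "reaction": "scatter"},
    {"id": 4, "reaction": "fission"},
]

# Sort indices by reaction type (WARP uses a parallel radix sort on
# the GPU; Python's stable sort stands in for it here).
remap = sorted(range(len(particles)),
               key=lambda i: particles[i]["reaction"])

# Threads handling one reaction type now read contiguous remap entries:
grouped = [particles[i]["reaction"] for i in remap]
```

Grouping like reactions this way lets threads in the same GPU warp execute the same physics kernel on contiguous indices, which is the divergence-avoiding point of the scheme.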
2013-07-16
Version 01 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model
GOORLEY, TIM
NASA Astrophysics Data System (ADS)
Wang, Ping; Yuan, Hongwu; Mei, Haiping; Zhang, Qianghua
2013-08-01
We study the transmission-time characteristics of laser pulses in a discrete random medium using the Monte Carlo method. First, the optical parameters of the medium are obtained from the OPAC software. A Monte Carlo model is then built to track the transport of a large number of photons, and statistics of the average photon arrival time and average pulse broadening are obtained; these results are compared with calculations of the two-frequency mutual coherence function and found to be very consistent. Finally, the impulse response function of the medium, obtained by polynomial fitting, can be used to correct the inter-symbol interference caused by discrete random media in optical communications and thereby reduce the system error rate.
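The arrival-time statistics described above come from tallying photon path lengths in a Monte Carlo random walk. The sketch below is a minimal 1-D toy with made-up optical parameters (isotropic scattering, no absorption, index of 1), not the authors' OPAC-parameterized model:

```python
import math
import random

C = 3.0e8     # speed of light, m/s (refractive index of 1 assumed)
MU_S = 10.0   # scattering coefficient, 1/m (illustrative value)
L = 1.0       # slab thickness, m

def photon_arrival_time(rng):
    """Track one photon through a purely scattering slab; return its
    arrival time at depth L, or None if it escapes backwards."""
    z, mu, path = 0.0, 1.0, 0.0
    while z < L:
        step = -math.log(rng.random()) / MU_S  # exponential free path
        z += mu * step
        path += step
        if z < 0.0:
            return None
        mu = 2.0 * rng.random() - 1.0          # isotropic direction cosine
    return path / C

rng = random.Random(3)
times = [t for t in (photon_arrival_time(rng) for _ in range(5000))
         if t is not None]
mean_t = sum(times) / len(times)   # average arrival time
ballistic_t = L / C                # unscattered flight time
```

The excess of the mean arrival time over the ballistic flight time is the pulse delay; the spread of `times` gives the pulse broadening the abstract refers to.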
NASA Astrophysics Data System (ADS)
Paolella, Arthur C.; Jemison, William D.; Borlando, Javier; Wang, Jun
2004-10-01
The expected increase in space and terrestrial services, including two-way fixed, SATCOM, CATV and mobile wireless services, requires expanded system capacity. This expansion has created an opportunity for the utilization of demonstrated photonic transport systems in wireless networks. System demonstrations and architectural developments have been proposed for distribution of communication services over fiber. Termed Fiber Radio and Hybrid Fiber Wireless, these systems offer the potential to improve services and reduce base station costs through increased bandwidth and ease of installation. We have developed and demonstrated DWDM broadband photonic transport systems able to meet the requirements for IS-95 Personal Communications Services operating at 1.9 GHz and Broadband Wireless Internet operating over the band of 2.5 to 2.7 GHz. Each DWDM channel operates from 1 to 3 GHz, transporting services up to 80 km. Solutions are being sought for low-cost transmitters to meet DWDM SATCOM system requirements, which include extending the transmission distance to over 100 km with a bandwidth that exceeds multiple octaves. These new requirements put high performance demands on the optical components. We have developed high-performance transmitters based on electro-absorption modulated lasers (EML) that can meet SATCOM requirements. We have shown that the EML is capable of providing the required CNR of 32 dB for satellite transmission in the band of 950 to 2150 MHz over a 100 km distance. In addition, we are investigating a new modulation technique, Microwave Photonic Vector Modulation (MPVM), which has the potential for wideband transmission in DWDM systems.
Program EPICP: Electron photon interaction code, photon test module. Version 94.2
Cullen, D.E.
1994-09-01
The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector; transmission, reflection and lateral leakage from the detector; and events and energy deposition as a function of depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules with the same objective, covering electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I merely mention where we expect the results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.
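The deposition/transmission/reflection bookkeeping described above can be illustrated with a 1-D toy tally. The coefficients, geometry, and isotropic-scattering assumption below are all invented for illustration; this is the spirit of EPICP's detector tallies, not its implementation:

```python
import math
import random

MU_A, MU_S = 0.3, 0.7   # toy absorption/scattering coefficients, 1/cm
MU_T = MU_A + MU_S
DEPTH = 5.0             # one-zone detector thickness, cm

def photon_fate(rng):
    """Classify a photon as deposited (absorbed inside), transmitted
    (leaves the back face) or reflected (leaves the front face)."""
    z, mu = 0.0, 1.0
    while True:
        z += mu * (-math.log(rng.random()) / MU_T)  # exponential free path
        if z >= DEPTH:
            return "transmitted"
        if z < 0.0:
            return "reflected"
        if rng.random() < MU_A / MU_T:              # collision: absorbed?
            return "deposited"
        mu = 2.0 * rng.random() - 1.0               # isotropic scatter

rng = random.Random(11)
N = 20000
fates = [photon_fate(rng) for _ in range(N)]
frac = {f: fates.count(f) / N
        for f in ("deposited", "transmitted", "reflected")}
```

The three fractions sum to one by construction, which is the basic conservation check any such tally must pass.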
NASA Astrophysics Data System (ADS)
Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-04-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrate a thorough understanding of the behavior of the detector system.
NASA Astrophysics Data System (ADS)
Faris, Gregory W.; Alexandrakis, George; Busch, David R.; Patterson, Michael S.
2001-06-01
We examine the ability to recover the optical properties of a two-layer turbid medium using multi-distance frequency-domain reflectance measurements and a hybrid Monte Carlo-diffusion model. Frequency-domain measurements are performed on two-layer liquid tissue phantoms simulating skin on muscle and skin on fat. Particular care with systematic effects in the photomultiplier is required for measurements at short source-detector distances. The model converges when fitting five free parameters (the optical properties of the upper and lower layers and the upper layer thickness). However, discrepancies between experiment and model yield insufficient accuracy for the absorption coefficient of the upper layer.
NASA Astrophysics Data System (ADS)
Stephani, K. A.; Goldstein, D. B.; Varghese, P. L.
2012-07-01
A general approach for achieving consistency in the transport properties between direct simulation Monte Carlo (DSMC) and Navier-Stokes (CFD) solvers is presented for five-species air. Coefficients of species diffusion, viscosity, and thermal conductivity are considered. The transport coefficients that are modeled in CFD solvers are often obtained by expressions involving sets of collision integrals, which are obtained from more realistic intermolecular potentials (i.e., ab initio calculations). In this work, the self-consistent effective binary diffusion and Gupta et al.-Yos transport models are considered. The DSMC transport coefficients are approximated from Chapman-Enskog theory, in which the collision integrals are computed using either the variable hard sphere (VHS) or the variable soft sphere (VSS) phenomenological collision cross-section model. The VHS and VSS parameters are then used to adjust the DSMC transport coefficients in order to achieve a best fit to the coefficients computed from more realistic intermolecular potentials over a range of temperatures. The best-fit collision model parameters are determined for both collision-averaged and collision-specific pairing approaches using the Nelder-Mead simplex algorithm. A consistent treatment of the diffusion, viscosity, and thermal conductivities is presented, and recommended sets of best-fit VHS and VSS collision model parameters are provided for a five-species air mixture.
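The fitting step described above can be shown in miniature for the VHS viscosity exponent. Because VHS viscosity follows a pure power law in temperature, a closed-form log-log regression recovers the exponent; the paper itself uses the Nelder-Mead simplex, and the reference data below are fabricated from the power law for illustration:

```python
import math

# Fabricated "reference" viscosities generated from the VHS scaling law
# itself (mu = mu_ref * (T/T_ref)^omega with omega = 0.74), so the fit
# below should recover omega exactly.  Real reference data would come
# from collision integrals of ab initio potentials.
T_REF, MU_REF, TRUE_OMEGA = 273.0, 1.72e-5, 0.74
temps = [200.0, 400.0, 800.0, 1600.0, 3200.0]
mus = [MU_REF * (t / T_REF) ** TRUE_OMEGA for t in temps]

def fit_vhs_omega(temps, mus, t_ref, mu_ref):
    """Least-squares fit, through the origin in log-log space, of the
    VHS viscosity-temperature exponent omega.  A simpler stand-in for
    the Nelder-Mead search used in the paper."""
    xs = [math.log(t / t_ref) for t in temps]
    ys = [math.log(m / mu_ref) for m in mus]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

omega = fit_vhs_omega(temps, mus, T_REF, MU_REF)
```

Nelder-Mead becomes necessary in the paper's setting because diffusion, viscosity, and thermal conductivity are fit simultaneously over collision pairs, where no closed form exists.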
NASA Astrophysics Data System (ADS)
Sakota, Daisuke; Takatani, Setsuo
2011-07-01
We have sought a non-invasive diagnosis of blood during extracorporeal circulation support. To achieve this goal, we have newly developed a photon-cell interactive Monte Carlo (pciMC) model for optical propagation through blood. pciMC explicitly describes the interaction of photons with 3-dimensional biconcave RBCs, with scattering described by a microscopic RBC boundary condition based on geometric optics. Using pciMC, we modeled how RBCs inside the extracorporeal circuit are oriented by the blood flow: the RBCs' orientation was defined by their long axis being directed toward the center of the circulation tube, while the RBCs were simultaneously allowed to rotate randomly about the long-axis direction. As the flow rate increased, the orientation rate increased and converged to approximately 22% at flow rates of 0.5 L/min and above. Finally, using this model, pciMC non-invasively and absolutely predicted hematocrit and hemoglobin with accuracies of 0.84+/-0.82 [Hct%] and 0.42+/-0.28 [g/dL], respectively, against measurements by a blood gas analyzer.
Monte Carlo simulation studies of spin transport in graphene armchair nanoribbons
NASA Astrophysics Data System (ADS)
Salimath, Akshay Kumar; Ghosh, Bahniman
2014-10-01
The research in the area of spintronics is gaining momentum due to the promise that spintronics-based devices have shown. Since the spin degree of freedom of an electron is used to store and process information, spintronics can provide numerous advantages over conventional electronics by enabling new functionalities. In this article, we study spin relaxation in graphene nanoribbons (GNRs) of armchair type by employing a semiclassical Monte Carlo approach. D'yakonov-Perel' relaxation due to structural inversion asymmetry (Rashba spin-orbit coupling) and Elliott-Yafet (EY) relaxation cause spin dephasing in armchair graphene nanoribbons. We investigate spin relaxation in α-, β- and γ-armchair GNRs with varying width and temperature.
Monte Carlo analysis of light transport in tissue in the mid-infrared domain
NASA Astrophysics Data System (ADS)
Kunapareddy, Nagapratima; Mourant, Judith R.; Aida, Toru
2004-07-01
The mid-infrared wavelength region contains characteristic peaks of several of the biochemical constituents of tissue. Recently, it has been shown that measurements of mammalian cell suspensions can provide estimates of biochemical composition and consequently information on the growth stage. This information may be used to identify cancerous tissue in-vivo. To facilitate the development of an in-vivo diagnostic technique, we have performed simulations of photon propagation and light collection in epithelial tissue, given a specific optical probe geometry.
Glaser, A; Zhang, R; Gladstone, D; Pogue, B
2014-06-01
Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo-based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams were explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be not suitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near-monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
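The beam-hardening effect the study identifies can be demonstrated with a two-bin toy spectrum: low-energy photons attenuate faster, so the fluence-weighted mean energy rises with depth. The energies and attenuation coefficients below are illustrative placeholders, not real cross sections:

```python
import math

# Two-bin toy spectrum: (energy in MeV, attenuation coefficient in 1/cm)
bins = [(0.5, 0.10), (5.0, 0.03)]
weights = [0.5, 0.5]  # equal fluence in each bin at the surface

def mean_energy(depth_cm):
    """Fluence-weighted mean photon energy after exponential
    attenuation of each spectral bin to the given depth."""
    phi = [w * math.exp(-mu * depth_cm)
           for (e, mu), w in zip(bins, weights)]
    return sum(p * e for p, (e, mu) in zip(phi, bins)) / sum(phi)

surface = mean_energy(0.0)   # 2.75 MeV for this toy spectrum
deep = mean_energy(20.0)     # higher: the soft component has died away
```

Because Cherenkov light yield and dose respond differently to this shifting spectrum, the two quantities drift apart with depth for polyenergetic beams, exactly as the abstract reports.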
Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo
2010-02-15
Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for 192Ir, 125I, and 103Pd on the final estimations of kerma and dose. Methods: For 192Ir, 125I, and 103Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of 192Ir, 125I, and 103Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for 192Ir, 125I, and 103Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
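The compensation mechanism in the Results section is simple arithmetic: a spectrum choice that rescales the photon yield per decay multiplies the dose rate and the air-kerma strength by the same factor, which cancels in the dose-rate constant. The numbers below are placeholders, not measured values:

```python
# Toy illustration of the cancellation: lam_a and lam_b are dose-rate
# constants computed with two spectra that differ only by a common
# photon-yield factor.  All numeric values are hypothetical.
dose_rate = 1.10
air_kerma_strength = 0.98
scale = 1.02  # e.g. a spectrum with 2% higher photon yield per Bq

lam_a = dose_rate / air_kerma_strength
lam_b = (dose_rate * scale) / (air_kerma_strength * scale)
```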
Joshi, Chandra P; Darko, Johnson; Vidyasagar, P B; Schreiner, L John
2010-04-01
Underdosing of treatment targets can occur in radiation therapy due to electronic disequilibrium around air-tissue interfaces when tumors are situated near natural air cavities. These effects have been shown to increase with beam energy and decrease with field size. Intensity modulated radiation therapy (IMRT) and tomotherapy techniques employ combinations of multiple small radiation beamlets of varying intensities to deliver highly conformal radiation therapy. The use of small beamlets in these techniques may therefore result in underdosing of the treatment target in the air-tissue interface region surrounding an air cavity. This work was undertaken to investigate dose reductions near the air-water interfaces of 1x1x1 and 3x3x3 cm(3) air cavities, typically encountered in the treatment of head and neck cancer utilizing radiation therapy techniques such as IMRT and tomotherapy using small fields of Co-60, 6 MV and 15 MV photons. Additional investigations were performed for larger photon field sizes encompassing the entire air cavity, such as encountered in conventional three dimensional conformal radiation therapy (3DCRT) techniques. The EGSnrc/DOSXYZnrc Monte Carlo code was used to calculate the dose reductions (in water) in the air-water interface region for single, parallel opposed and four field irradiations with 2x2 cm(2) (beamlet), 10x2 cm(2) (fan beam), 5x5 and 7x7 cm(2) field sizes. The magnitude of dose reduction in water near the air-water interface increases with photon energy and decreases both with distance from the interface and as the number of beams is increased. No dose reductions were observed for large field sizes encompassing the air cavities. The results demonstrate that Co-60 beams may provide significantly smaller interface dose reductions than 6 MV and 15 MV irradiations for small field irradiations such as those used in IMRT and tomotherapy. PMID:20589116
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3 carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Also, response data can be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
Monte Carlo simulation of vapor transport in physical vapor deposition of titanium
Balakrishnan, Jitendra; Boyd, Iain D.; Braun, David G.
2000-05-01
In this work, the direct simulation Monte Carlo (DSMC) method is used to model the physical vapor deposition of titanium using electron-beam evaporation. Titanium atoms are vaporized from a molten pool at a very high temperature and are accelerated collisionally to the deposition surface. The electronic excitation of the vapor is significant at the temperatures of interest. Energy transfer between the electronic and translational modes of energy affects the flow significantly. The electronic energy is modeled in the DSMC method and comparisons are made between simulations in which electronic energy is excluded from and included among the energy modes of particles. The experimentally measured deposition profile is also compared to the results of the simulations. It is concluded that electronic energy is an important factor to consider in the modeling of flows of this nature. The simulation results show good agreement with experimental data. (c) 2000 American Vacuum Society.
Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
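The communication pattern of such a spatially decomposed tracker can be sketched in a serial toy model (this is not Mercury's implementation): each "processor" owns a slab of a 1-D geometry, tracks its particles to absorption or a slab boundary, and hands boundary-crossing particles to its neighbour's work queue.

```python
import random
from collections import deque

# Toy 1-D geometry: domains [0,5) and [5,10) assigned to "processors" 0 and 1.
BOUNDS = [(0.0, 5.0), (5.0, 10.0)]

def track(x, direction, lo, hi, rng):
    """Advance a particle by exponential free flights (mean free path = 1)
    until it leaves [lo, hi) or is absorbed."""
    while True:
        x += direction * rng.expovariate(1.0)
        if x < lo or x >= hi:
            return 'exited', x
        if rng.random() < 0.1:          # crude per-collision absorption probability
            return 'absorbed', x

rng = random.Random(1)
queues = [deque(), deque()]             # per-processor work queues
queues[0].extend((0.0, +1.0) for _ in range(100))
absorbed = leaked = 0
while any(queues):
    for rank, (lo, hi) in enumerate(BOUNDS):
        while queues[rank]:
            x, d = queues[rank].popleft()
            fate, x = track(x, d, lo, hi, rng)
            if fate == 'absorbed':
                absorbed += 1
            elif 0.0 <= x < 10.0:               # crossed into the other domain:
                queues[1 - rank].append((x, d)) # "communicate" to the neighbour
            else:
                leaked += 1
print(absorbed + leaked)  # 100: every history terminates
```

In a real MPI implementation the queue hand-off becomes inter-processor message passing, but the bookkeeping is the same: a history is conserved across domain boundaries until it is absorbed or escapes.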
NASA Astrophysics Data System (ADS)
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and the high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use as a rem counter.
Correlated two-photon transport in a one-dimensional waveguide side-coupled to a nonlinear cavity
Liao Jieqiao; Law, C. K.
2010-11-15
We investigate the transport properties of two photons inside a one-dimensional waveguide side-coupled to a single-mode nonlinear cavity. The cavity is filled with a nonlinear Kerr medium. Based on the Laplace transform method, we present an analytic solution for the quantum states of the two transmitted and reflected photons, which are initially prepared in a Lorentzian wave packet. The solution reveals how quantum correlation between the two photons emerges after the scattering by the nonlinear cavity. In particular, we show that the output wave function of the two photons in position space can be localized in relative coordinates, which is a feature that might be interpreted as a two-photon bound state in this waveguide-cavity system.
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd{sub 2}O{sub 2}S:Tb screens; (ii) investigate the impact of size-distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q{sub sct}), absorption efficiency (Q{sub abs}), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 {mu}m. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd{sub 2}O{sub 2}S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size-distribution ({sigma}= 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up
Lee, C; Badal, A
2014-06-15
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric error compared to real anatomy composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels was 6.4×10{sup 4}, 3.3×10{sup 4}, and 1.3×10{sup 4} for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10{sup 4} histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
On the use of Monte Carlo simulations to model transport of positrons in gases and liquids.
Petrović, Zoran Lj; Marjanović, Srdjan; Dujko, Saša; Banković, Ana; Malović, Gordana; Buckman, Stephen; Garcia, Gustavo; White, Ron; Brunger, Michael
2014-01-01
In this paper we draw a parallel between the swarm method in the physics of ionized gases and the modeling of positrons in radiation therapy and diagnostics. The basic idea is to take advantage of the experience gained in the past with electron swarms and to use it in establishing procedures for modeling positron diagnostics and therapy based on well-established experimental binary collision data. In doing so we discuss the application of the Monte Carlo technique for positrons in the same manner as used previously for electron swarms, we discuss the role of complete cross section sets (complete in terms of number, momentum and energy balance and tested against measured swarm parameters), and we discuss the role of benchmarks and how to choose benchmarks for electrons that may be subject to experimental verification. Finally we show some samples of positron trajectories together with secondary electrons, established solely on the basis of accurate binary cross sections, and also how those may be used in modeling of both gas-filled traps and living organisms. PMID:23466009
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A range of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
NASA Astrophysics Data System (ADS)
Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam
2015-09-01
Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method of predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made for three voxel sizes: 1 × 1 × 0.1 cm^3, 1 × 1 × 0.5 cm^3, and 1 × 1 × 0.8 cm^3. A total of 1 × 10^9 histories were simulated in order to reach statistical uncertainties of 2%; each simulation takes about 9-10 hours to complete. Measurements were made with a 10 × 10 cm^2 field size for 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. MC-simulated and measured dose distributions were compared in a water phantom. The outputs of the simulations, i.e. the percent depth dose and the dose profile at dmax from the three sets of calculations, are presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over 0-5 cm depth. Dose scored in a voxel is a volume-averaged estimate of the dose at the center of the voxel. The results of this study show that the difference between Monte Carlo simulation and experimental data depends on the voxel size for both the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained for the 1 × 1 × 0.8 cm^3 voxel size. The profile analysis focused on the high-gradient dose region: for the profile scan along the Y axis, the largest difference, about 12%, was obtained for the 1 × 1 × 0.1 cm^3 voxel size. This study demonstrates that the choice of voxel arrangement in a Monte Carlo simulation is important.
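The voxel-size effect described here, dose scored in a voxel being a volume average rather than a point value, can be sketched with a toy depth-dose curve. The functional form below is an illustrative assumption, not beam data: the coarser the voxel in a high-gradient (build-up) region, the further its average strays from the dose at the voxel centre.

```python
import math

def pdd(z):
    """Toy percent-depth-dose curve with a steep build-up region
    (illustrative functional form, not measured beam data)."""
    return (1.0 - math.exp(-4.0 * z)) * math.exp(-0.05 * z)

def voxel_dose(z_lo, z_hi, n=1000):
    """Volume-averaged dose over a voxel spanning [z_lo, z_hi] (midpoint rule)."""
    dz = (z_hi - z_lo) / n
    return sum(pdd(z_lo + (i + 0.5) * dz) for i in range(n)) * dz / (z_hi - z_lo)

point = pdd(0.05)                 # dose at the centre of a 1 mm voxel
thin  = voxel_dose(0.0, 0.1)      # 1 mm voxel average
thick = voxel_dose(0.0, 0.8)      # 8 mm voxel average
# In the high-gradient build-up region the coarse voxel's average is a much
# poorer estimate of the dose at its own centre than the thin voxel's is:
assert abs(thin - point) < abs(thick - pdd(0.4))
```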
NASA Astrophysics Data System (ADS)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSXYZnrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts and regions inside the uniform water. For this study, the distance-to-agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
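A minimal 1-D version of the γ index analysis used in such comparisons can be sketched as follows (3 mm distance-to-agreement, dose differences normalized to the global maximum; the reference values are illustrative, not data from the study):

```python
import math

def gamma_index(z_eval, d_eval, z_ref, d_ref, dta_mm=3.0, dd_frac=0.02):
    """1-D gamma index of one evaluated point against a reference curve:
    the minimum over reference points of the combined distance/dose metric."""
    d_max = max(d_ref)                               # global normalisation
    return min(
        math.sqrt((abs(z_eval - z) / dta_mm) ** 2 +
                  ((d_eval - d) / (dd_frac * d_max)) ** 2)
        for z, d in zip(z_ref, d_ref)
    )

# Reference depth-dose samples every 1 mm (illustrative values).
z_ref = [float(i) for i in range(11)]
d_ref = [100.0 - 2.0 * z for z in z_ref]             # gentle gradient

# A point agreeing within 2% passes (gamma <= 1); a 10% error fails.
assert gamma_index(5.0, 90.0 * 1.01, z_ref, d_ref) <= 1.0
assert gamma_index(5.0, 90.0 * 1.10, z_ref, d_ref) > 1.0
```

Real 2-D/3-D gamma evaluation minimizes the same metric over a spatial neighbourhood of each evaluated point; the pass criterion (gamma ≤ 1) is unchanged.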
Self-Adjoint Angular Flux Equation for Coupled Electron-Photon Transport
Liscum-Powell, J.L.; Lorence, L.J. Jr.; Morel, J.E.; Prinja, A.K.
1999-07-08
Recently, Morel and McGhee described an alternate second-order form of the transport equation called the self-adjoint angular flux (SAAF) equation, which has the angular flux as its unknown. The SAAF formulation has all the advantages of the traditional even- and odd-parity self-adjoint equations, with the added advantages that it yields the full angular flux when it is numerically solved, it is significantly easier to implement reflective and reflective-like boundary conditions, and in the appropriate form it can be solved in void regions. The SAAF equation has the disadvantage that the angular domain is the full unit sphere and, like the even- and odd-parity form, S{sub n} source iteration cannot be implemented using the standard sweeping algorithm. Also, problems arise in pure scattering media. Morel and McGhee demonstrated the efficacy of the SAAF formulation for neutral particle transport. Here we apply the SAAF formulation to coupled electron-photon transport problems using multigroup cross-sections from the CEPXS code and S{sub n} discretization.
A POD reduced order model for resolving angular direction in neutron/photon transport problems
Buchan, A.G.; Calloo, A.A.; Goffin, M.G.; Dargaville, S.; Fang, F.; Pain, C.C.; Navon, I.M.
2015-09-01
This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
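The method of snapshots at the heart of this kind of ROM can be sketched with synthetic data: collect the vectors of angular coefficients as columns of a snapshot matrix, take its SVD, and keep the leading left singular vectors as the reduced basis. The dimensions and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy snapshot matrix: each column is a vector of angular-expansion
# coefficients recorded at one spatial location. The synthetic data secretly
# lives in a 3-dimensional subspace, plus a little noise.
basis_true = rng.standard_normal((40, 3))
snapshots = basis_true @ rng.standard_normal((3, 25)) \
            + 1e-6 * rng.standard_normal((40, 25))

# Method of snapshots: the SVD of the snapshot matrix yields the optimal
# (in the least-squares sense) reduced basis; truncate to r modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
pod_basis = U[:, :r]                      # reduced angular basis

# Project a new "high-resolution" coefficient vector onto the POD basis
# and reconstruct it; the low-rank structure makes the error tiny.
x = basis_true @ rng.standard_normal(3)
x_rom = pod_basis @ (pod_basis.T @ x)
assert np.linalg.norm(x - x_rom) / np.linalg.norm(x) < 1e-4
```

The order-of-magnitude savings reported in the article come from solving the transport problem in the r-dimensional coefficient space instead of the full angular expansion.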
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
NASA Astrophysics Data System (ADS)
García-García, J.; Martín, F.; Oriols, X.; Suñé, J.
Because of their high switching speed, low power consumption and reduced complexity for implementing a given function, resonant tunneling diodes (RTDs) have recently been recognized as excellent candidates for digital circuit applications [1]. Device modeling and simulation is thus important, not only to understand mesoscopic transport properties, but also to provide guidance in optimal device design and fabrication. Several approaches have been used to this end. Among kinetic models, those based on the non-equilibrium Green function formalism [2] have gained increasing interest due to their ability to incorporate coherent and incoherent interactions in a unified formulation. The Wigner distribution function approach has also been extensively used to study quantum transport in RTDs [3-6]. The main limitations of this formulation are the semiclassical treatment of carrier-phonon interactions by means of the relaxation time approximation and the huge computational burden associated with the self-consistent solution of the Liouville and Poisson equations. This has imposed severe limitations on spatial domains, these being too small to succeed in the development of reliable simulation tools. Based on the Wigner function approach, we have developed a simulation tool that allows us to extend the simulation domains up to hundreds of nanometers without a significant increase in computer time [7]. This tool is based on the coupling between the Wigner distribution function (quantum Liouville equation) and the Boltzmann transport equation. The former is applied to the active region of the device including the double barrier, where quantum effects are present (quantum window, QW). The latter is solved by means of a Monte Carlo algorithm and applied to the outer regions of the device, where quantum effects are not expected to occur. Since the classical Monte Carlo algorithm is much less time consuming than the discretized version of the Wigner transport equation, we can considerably
Monte Carlo Simulation of Quantum Transport in Semiconductors Using Wigner Paths
NASA Astrophysics Data System (ADS)
Bertoni, A.; García-García, J.; Bordone, P.; Brunetti, R.; Jacoboni, C.
Charge transport in mesoscopic semiconductor systems must be analyzed in terms of a quantum theory, since typical dimensions of present-day physical structures are comparable with the electron coherence length. Theoretical approaches based on fully quantum mechanical grounds have been developed in the last decade with the purpose of analyzing the quantum electron-phonon interaction in electron transport. The Wigner function (WF) formalism is particularly suitable for the analysis of mesoscopic structures owing to its phase-space formulation that allows a natural treatment of space-dependent problems with given boundary conditions. The Hamiltonian describing the system is [1] $\mathcal{H} = -\frac{\hbar^2}{2m}\nabla^2 + \sum_q \hbar\omega_q\, b_q^{\dagger} b_q + \sum_q \left( F_q\, b_q\, e^{iqr} + F_q^{\ast}\, b_q^{\dagger}\, e^{-iqr} \right) + V(r) + eE \cdot r$
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
NASA Astrophysics Data System (ADS)
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
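A heavily simplified sketch of the idea follows, under an assumption the abstract does not make: here the lower-triangular transport map is merely the Cholesky factor of the sample covariance, adapted once from past states, rather than the nonlinear Knothe-Rosenblatt map obtained by convex optimization in the work described above. It still shows the core mechanism: isotropic random-walk steps in the reference space become correlated proposals in the target space.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    """Strongly correlated Gaussian target (stand-in for a posterior)."""
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

# Linear lower-triangular "transport map" L: proposal x' = x + step * L @ xi,
# with xi a standard normal reference sample. Metropolis-Hastings acceptance
# is unchanged because the proposal stays symmetric.
samples, x = [], np.zeros(2)
L = np.eye(2)
for i in range(5000):
    prop = x + 0.5 * L @ rng.standard_normal(2)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x.copy())
    if i == 1000:                       # one-shot adaptation from past states
        L = np.linalg.cholesky(np.cov(np.array(samples).T) + 1e-6 * np.eye(2))

samples = np.array(samples[2000:])
# The adapted map lets the chain reproduce the target's strong correlation.
assert np.corrcoef(samples.T)[0, 1] > 0.5
```

Replacing the Cholesky factor with a nonlinear triangular map fitted by optimization, as in the abstract, extends the same preconditioning idea to non-Gaussian targets.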
Correlated Cooper pair transport and microwave photon emission in the dynamical Coulomb blockade
NASA Astrophysics Data System (ADS)
Leppäkangas, Juha; Fogelström, Mikael; Marthaler, Michael; Johansson, Göran
2016-01-01
We study theoretically electromagnetic radiation emitted by inelastic Cooper-pair tunneling. We consider a dc-voltage-biased superconducting transmission line terminated by a Josephson junction. We show that the generated continuous-mode electromagnetic field can be expressed as a function of the time-dependent current across the Josephson junction. The leading-order expansion in the tunneling coupling, similar to the P (E ) theory, has previously been used to investigate the photon emission statistics in the limit of sequential (independent) Cooper-pair tunneling. By explicitly evaluating the system characteristics up to the fourth order in the tunneling coupling, we account for dynamics between consecutively tunneling Cooper pairs. Within this approach we investigate how temporal correlations in the charge transport can be seen in the first- and second-order coherences of the emitted microwave radiation.
Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh
Drumm, C.R.; Lorenz, J.
1999-03-02
A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers.
NASA Astrophysics Data System (ADS)
Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung
2012-08-01
The energy-resolved photon counting detector provides the spectral information that can be used to generate images. The novel imaging methods, including the K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using the Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled the x-ray computed tomography system based on the CdTe energy-resolved photon counting detector and polymethylmethacrylate phantom, which have iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at K-edge absorption energy of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased, when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above the specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is a little difference in the contrast, when the number of energy bin is
NASA Astrophysics Data System (ADS)
Jaradat, Adnan Khalaf
The x ray leakage from the housing of a therapy x ray source is regulated to be <0.1% of the useful beam exposure at a distance of 1 m from the source. The x ray leakage in the backward direction has been measured from linacs operating at 4, 6, 10, 15, and 18 MV using a 100 cm3 ionization chamber and track-etch detectors. The leakage was measured at nine different positions over the rear wall using a 3 × 3 matrix with a 1 m separation between adjacent positions. In general, the leakage was less than the canonical value, but the exact value depends on energy, gantry angle, and measurement position. Leakage at 10 MV for some positions exceeded 0.1%. Electrons with energies greater than about 9 MeV can produce neutrons. Neutron leakage has been measured around the head of electron accelerators at a distance of 1 m from the target at 0°, 46°, 90°, 135°, and 180° azimuthal angles; for electron energies of 9, 12, 15, 16, 18, and 20 MeV and 10, 15, and 18 MV x ray photon beams, using a BD-PND neutron bubble detector and track-etch detectors. The highest neutron dose equivalent per unit electron dose was at 0° for all electron energies. The neutron leakage from photon beams was the highest among all the machines. Intensity modulated radiation therapy (IMRT) delivery consists of a summation of small beamlets having different weights that make up each field. A linear accelerator room designed exclusively for IMRT use would require different, probably lower, tenth value layers (TVL) for determining the required wall thicknesses for the primary barriers. The first, second, and third TVL of 60Co gamma rays and photons from 4, 6, 10, 15, and 18 MV x ray beams in concrete have been determined and modeled using a Monte Carlo technique (MCNP version 4C2) for cone beams of half-opening angles of 0°, 3°, 6°, 9°, 12°, and 14°.
Mosleh-Shirazi, Mohammad Amin; Karbasi, Sareh; Shahbazi-Gahrouei, Daryoush; Monadi, Shahram
2012-01-01
Full buildup diodes can cause significant dose perturbation if they are used on most or all radiotherapy fractions. Given the importance of frequent in vivo measurements in complex treatments, using thin buildup (low-perturbation) diodes instead is gathering interest. However, such diodes are strictly unsuitable for high-energy photons; therefore, their use requires evaluation and careful measurement of correction factors (CFs). There is little published data on such factors for low-perturbation diodes, and none on diode characterization for 9 MV X-rays. We report on MCNP4c Monte Carlo models of low-perturbation (EDD5) and medium-perturbation (EDP10) diodes, and a comparison of source-to-surface distance, field size, temperature, and orientation CFs for cobalt-60 and 9 MV beams. Most of the simulation results were within 4% of the measurements. The results argue against using the EDD5 at axial angles beyond ±50° and at tilt angles outside the range 0° to +50° at 9 MV. Outside these ranges, although the EDD5 can be used for accurate in vivo dosimetry at 9 MV, its CF variations were found to be 1.5-7.1 times larger than those of the EDP10 and, therefore, should be applied carefully. Finally, the MCNP diode models are sufficiently reliable tools for independent verification of potentially inaccurate measurements. PMID:23149783
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2015-12-01
Assessment of intake due to long-lived actinides via the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and detectors in the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics, in the form of scattering factors (SFs). SFs are found to decrease with increasing energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV; it reduces to 1.36 at 238 keV. PMID:25468992
Monte Carlo modeling of the spatially dispersive carrier transport in P3HT and P3HT:PCBM blends
NASA Astrophysics Data System (ADS)
Jiang, Xin
2009-10-01
The presence of traps, arising from morphological or chemical defects, can be critical to the performance of organic semiconductor devices. Traps can reduce the charge carrier mobility, disturb the internal electrical field, drive recombination, and reduce the overall device efficiency as well as operational stability. In this work, we investigate the role of traps in determining charge transport properties of organic semiconductors and blends such as P3HT and P3HT:PCBM through Monte-Carlo (MC) simulations in conjunction with time-of-flight (TOF) mobility measurements. We employ a Marcus theory description of individual hopping events based on the molecular reorganization energy (λ) for the MC simulations. Trap states are modeled as diffuse bands that reside at some energy away from the main transport band. This model is used to simulate TOF transients, and the results are compared to experimental data. As is expected from the Marcus theory equation, the mobility is seen to be maximal at an optimal value of λ. This optimal value is strongly field dependent, but is found to be independent of the trap density. In comparing MC simulations with TOF data, it is found that inclusion of traps results in a much better fit to the data and provides a mechanism to simulate dispersive transport with a long tail resulting from trapping and detrapping of carriers before they exit the device. We present results for a range of trap densities and statistical distributions and discuss the implications for the operation of bulk heterojunction organic photovoltaic devices.
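The Marcus-theory hopping rate underlying such simulations, and the optimum in λ the abstract mentions, can be sketched directly; the transfer integral J and the site-energy difference used here are illustrative assumptions, not values from the work.

```python
import numpy as np

HBAR = 6.582e-16   # reduced Planck constant, eV*s
KB   = 8.617e-5    # Boltzmann constant, eV/K

def marcus_rate(dE, lam, J=3e-3, T=300.0):
    """Marcus hopping rate (s^-1) for site-energy difference dE and
    reorganization energy lam, both in eV; J (eV) is an assumed transfer
    integral."""
    kt = KB * T
    return (2 * np.pi / HBAR) * J**2 / np.sqrt(4 * np.pi * lam * kt) \
           * np.exp(-(dE + lam)**2 / (4 * lam * kt))

# For a downhill hop (dE < 0) the rate peaks near lam = |dE|,
# slightly shifted downward by the 1/sqrt(lam) prefactor:
lams = np.linspace(0.05, 0.6, 200)
rates = marcus_rate(-0.2, lams)
```

In a hopping MC code, rates like these (one per neighbor) feed the hop-selection step; the field dependence enters through dE, which is why the optimal λ shifts with applied field.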
Coupled Deterministic/Monte Carlo Simulation of Radiation Transport and Detector Response
Gesh, Christopher J.; Meriwether, George H.; Pagh, Richard T.; Smith, Leon E.
2005-09-01
The analysis of radiation sensor systems used to detect and identify nuclear and radiological weapons materials requires detailed radiation transport calculations. Two basic steps are required to solve radiation detection scenario analysis (RDSA) problems. First, the radiation field produced by the source must be calculated. Second, the response that the radiation field produces in a detector must be determined. RDSA problems are characterized by complex geometries, the presence of shielding materials, and large amounts of scattering (or absorption/re-emission). In this paper, we will discuss the use of the Attila code [2] for RDSA.
NASA Astrophysics Data System (ADS)
Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang
2011-07-01
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transport. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are currently becoming increasingly important in boosting high-performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.
Chin, P.W. E-mail: mary.chin@physics.org
2005-10-15
This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
NASA Astrophysics Data System (ADS)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
Rhoades, W.A.; Simpson, D.B.
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
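The discrete-ordinates machinery summarized above can be illustrated with a drastically reduced analogue: a one-group, 1-D slab solve with isotropic scattering, diamond differencing, and source iteration. All cross sections are illustrative assumptions, and none of TORT's 3-D, multigroup, or acceleration features are reproduced.

```python
import numpy as np

def sn_slab(sig_t=1.0, sig_s=0.5, q=1.0, width=4.0, nx=80, n_ang=8, tol=1e-8):
    """One-group, 1-D slab discrete-ordinates (S_N) solve with vacuum
    boundaries: Gauss-Legendre angular quadrature, diamond-difference
    sweeps, and source iteration on the scattering source."""
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # directions and weights
    dx = width / nx
    phi = np.zeros(nx)                               # scalar flux
    for _ in range(500):
        src = 0.5 * (sig_s * phi + q)                # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            psi_in = 0.0                             # vacuum incoming flux
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond-difference cell balance along direction mu[m]
                a = 2 * abs(mu[m]) / dx
                psi_c = (src[i] + a * psi_in) / (sig_t + a)
                psi_in = 2 * psi_c - psi_in          # outgoing cell-edge flux
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:
            phi = phi_new
            break
        phi = phi_new
    return phi
```

Source iteration converges at a rate governed by the scattering ratio σs/σt, which is why codes like TORT add acceleration schemes on top of the bare sweep.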
A simplified spherical harmonic method for coupled electron-photon transport calculations
Josef, J.A.
1997-12-01
In this thesis the author has developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. He has performed a first-time Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. The theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. In addition, he has applied an angular multigrid acceleration scheme and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. The author has investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems.
Cooper, M.A.
2000-07-03
We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
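The contrast the abstract draws can be sketched by sampling two candidate exit-angle laws. The cosine law p(μ) = 2μ is sampled exactly by inverse transform; as a transport-motivated stand-in (my assumption, not the paper's distribution, whose exact form involves Chandrasekhar's H-function) I use the flux-weighted linear intensity p(μ) ∝ μ(μ + 0.7104), with 0.7104 the standard Milne extrapolation constant.

```python
import numpy as np

Q = 0.7104   # Milne-problem extrapolation constant (standard value)
rng = np.random.default_rng(1)

def sample_cosine(n):
    # cosine law p(mu) = 2*mu on [0, 1]; inverse transform gives mu = sqrt(xi)
    return np.sqrt(rng.random(n))

def sample_transport_like(n):
    """Rejection sampling from p(mu) proportional to mu*(mu + Q), an
    illustrative stand-in for the transport-theory emergent distribution."""
    out = np.empty(0)
    while out.size < n:
        cand = rng.random(n)
        keep = rng.random(n) * (1 + Q) < cand * (cand + Q)  # envelope peaks at mu=1
        out = np.concatenate([out, cand[keep]])
    return out[:n]

mu_cos = sample_cosine(200_000)
mu_tr = sample_transport_like(200_000)
# the transport-motivated law is noticeably more forward-peaked than the cosine law
```

Comparing the sample means already shows the two laws disagree, which is the kind of discrepancy that motivates replacing the cosine law in the random walk method.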
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.
NASA Astrophysics Data System (ADS)
Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio
2015-09-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
Douglass, Michael; Bezak, Eva; Penfold, Scott
2013-07-15
Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated in 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Nadamoto, Ken; Okada, Eiji; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-03-01
Accurate modeling and efficient calculation of photon migration in biological tissues are required for determining the optical properties of living tissues from in vivo experiments. This study develops a calculation scheme of photon migration for determining the optical properties of the rat cerebral cortex (ca. 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation, assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with the profiles measured by in vivo experiments using near-infrared light. In addition, an efficient calculation method is tested using the delta-Eddington approximation of the scattering phase function.
Kinetic Monte Carlo model of charge transport in hematite (α-Fe2O3)
Kerisit, Sebastien; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parametrized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ≈2% of iron(III) sites, whereas the
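The core kinetic Monte Carlo step used in such models is the residence-time (BKL/Gillespie) algorithm: pick a hop with probability proportional to its rate, then advance the clock by an exponential waiting time. A generic sketch, with the lattice and rate model left abstract (the uniform 1-D hop rate below is an illustrative assumption, not a hematite parameter):

```python
import numpy as np

def kmc_trajectory(rates_fn, start, n_hops, seed=0):
    """Residence-time kinetic Monte Carlo.  `rates_fn(site)` must return a
    list of (neighbor_site, rate) pairs; returns the final site and the
    elapsed physical time."""
    rng = np.random.default_rng(seed)
    site, t = start, 0.0
    for _ in range(n_hops):
        nbrs, rates = zip(*rates_fn(site))
        rates = np.asarray(rates, dtype=float)
        total = rates.sum()
        site = nbrs[rng.choice(len(nbrs), p=rates / total)]  # pick a hop
        t += rng.exponential(1.0 / total)                    # advance the clock
    return site, t

# toy 1-D lattice with a uniform hop rate k0 (illustrative assumption)
k0 = 1e12   # s^-1
final, t = kmc_trajectory(lambda s: [(s - 1, k0), (s + 1, k0)], 0, 10_000)
```

Tracking mean-square displacement versus `t` over many such trajectories is how the polaron diffusion coefficient discussed above would be extracted.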
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
Angular biasing in implicit Monte-Carlo
Zimmerman, G.B.
1994-10-20
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
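The weight bookkeeping behind angular biasing can be sketched in one dimension of the direction variable: emit from a biased angular pdf that favours the capsule direction and assign each photon the weight ratio of the analog to the biased pdf, so tallies remain unbiased. The linear pdf and the bias strength below are illustrative assumptions, not the scheme's actual biasing function.

```python
import numpy as np

def biased_directions(n, a=0.8, seed=0):
    """Angular biasing sketch: instead of the isotropic p(mu) = 1/2 on [-1, 1],
    emit from p_b(mu) = (1 + a*mu)/2, which favours mu near +1 (toward the
    capsule); each photon carries weight p/p_b so weighted tallies stay
    unbiased."""
    rng = np.random.default_rng(seed)
    mu = np.empty(0)
    while mu.size < n:
        cand = rng.uniform(-1.0, 1.0, n)
        keep = rng.random(n) * (1 + a) < 1 + a * cand   # rejection vs. flat envelope
        mu = np.concatenate([mu, cand[keep]])
    mu = mu[:n]
    w = 1.0 / (1.0 + a * mu)        # statistical weight = p_analog / p_biased
    return mu, w

mu, w = biased_directions(100_000)
# more photons head toward mu = +1, each with reduced weight; weighted
# averages still reproduce isotropic expectations such as E[mu] = 0
```

This is the sense in which "an increased number of low weight photons" can be directed at the capsule without biasing the transport result.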
Overview of the MCU Monte Carlo Software Package
NASA Astrophysics Data System (ADS)
Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.
2014-06-01
MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A
2014-01-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively. PMID:25234839
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method can accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPUs), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually straightforward because of the 'embarrassingly parallel' nature of MC codes. The situation differs for eigenvalue calculations, however, which proceed on a generation-by-generation basis, so thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code performs eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. Speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively, and the speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism is analyzed, and a possible method to enhance thread-level parallelism is proposed. (authors)
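The generation-by-generation structure that has to be coordinated across GPU threads can be illustrated with a scalar toy model. The sketch below is plain Python, not the authors' CUDA code; the cross sections, fission yield ν, and the infinite homogeneous medium are all made-up assumptions (which also makes successive generations statistically independent, unlike true power iteration with spatial coupling). It estimates k∞ = νΣ_f/Σ_a by tallying fission neutrons produced per source neutron in each generation:

```python
import random

def k_inf_generation(n_hist, sig_s, sig_a, sig_f, nu, rng):
    """One fission generation in an infinite homogeneous medium.
    Returns fission neutrons produced per source neutron (k estimate)."""
    sig_t = sig_s + sig_a
    produced = 0
    for _ in range(n_hist):
        while True:
            if rng.random() < sig_a / sig_t:      # collision is an absorption
                if rng.random() < sig_f / sig_a:  # absorption is a fission
                    # integer sampling of the non-integer yield nu
                    n = int(nu) + (1 if rng.random() < nu - int(nu) else 0)
                    produced += n
                break
            # otherwise: scatter; position is irrelevant in an infinite medium
    return produced / n_hist

def power_iteration(generations=20, n_hist=20000, seed=1):
    rng = random.Random(seed)
    k_estimates = [k_inf_generation(n_hist, sig_s=0.3, sig_a=0.1,
                                    sig_f=0.05, nu=2.5, rng=rng)
                   for _ in range(generations)]
    # discard a few "inactive" generations before averaging, as real codes do
    active = k_estimates[5:]
    return sum(active) / len(active)
```

With these illustrative constants the analytic answer is k∞ = 2.5 × 0.05/0.1 = 1.25, which the averaged generations reproduce to within statistics.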
Koo, Brian T; Berard, Philip G; Clancy, Paulette
2015-03-10
Two-dimensional covalent organic frameworks (COFs), with their predictable assembly into ordered porous crystalline materials, tunable composition, and high charge carrier mobility, offer the possibility of creating ordered bulk heterojunction solar cells given a suitable electron-transporting material to fill the pores. The photoconductive (hole-transporting) properties of many COFs have been reported, including the recent creation of a TT-COF/PCBM solar cell by Dogru et al. Although a prototype device has been fabricated, its poor solar efficiency suggests a potential issue with electron transport caused by the interior packing of the fullerenes. Such packing information is absent and cannot be obtained experimentally. In this paper, we use Kinetic Monte Carlo (KMC) simulations to understand the dominant pore-filling mechanisms and packing configurations of C60 molecules in a Pc-PBBA COF that are similar to the COF fabricated experimentally. The KMC simulations thus offer more realistic filling conditions than our previously used Monte Carlo (MC) techniques. We found persistently large separation distances between C60 molecules that are absent in the more tractable MC simulations and which are likely to hinder electron transport significantly. We attribute the looser fullerene packing to the existence of stable motifs with pairwise distances that are mismatched with the underlying adsorption lattice of the COF. We conclude that larger pore COFs may be necessary to optimize electron transport and hence produce higher efficiency devices. PMID:26579766
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
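The course fundamentals listed above (random sampling, free-path selection, collision physics, tallies) fit in a few lines for the simplest possible problem. This is a generic textbook sketch, not material from the RACER notes; the cross section, scattering probability, and slab thickness are arbitrary:

```python
import math
import random

def slab_transmission(n_hist, sigma_t, c, thickness, seed=0):
    """Analog Monte Carlo for a 1-D slab: sample free paths from an
    exponential, scatter isotropically with probability c, absorb
    otherwise, and tally the fraction of histories leaking through
    the far face."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                                   # normally incident
        while True:
            x += -math.log(1.0 - rng.random()) / sigma_t * mu
            if x >= thickness:
                transmitted += 1                            # leaked forward
                break
            if x < 0.0:
                break                                       # leaked backward
            if rng.random() < c:
                mu = 2.0 * rng.random() - 1.0               # isotropic scatter
            else:
                break                                       # absorbed
    return transmitted / n_hist
```

For a purely absorbing slab (c = 0) the tally converges to the analytic uncollided transmission exp(-Σ_t T), a standard first check for any transport code.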
NASA Astrophysics Data System (ADS)
Chishti, Sabiq; Ghosh, Bahniman; Bishnoi, Bhupesh
2015-02-01
We have analyzed the spin transport behaviour of four II-VI semiconductor nanowires by simulating spin-polarized transport using a semi-classical Monte Carlo approach. The scattering mechanisms considered are acoustic phonon scattering, surface roughness scattering, polar optical phonon scattering, and spin flip scattering. The II-VI materials used in our study are CdS, CdSe, ZnO and ZnS. The spin transport behaviour is first studied by varying the temperature (4-500 K) at a fixed diameter of 10 nm and then by varying the diameter (8-12 nm) at a fixed temperature of 300 K. For II-VI compounds, the dominant spin relaxation mechanisms, D'yakonov-Perel' and Elliott-Yafet, have been employed in a first-order model to simulate the spin transport. The dependence of the spin relaxation length (SRL) on the diameter and temperature has been analyzed.
Ueki, T.; Larsen, E.W.
1998-09-01
The authors show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new Boltzmann Monte Carlo (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (1) using an angular biasing function that is closely related to the 'asymptotic' solution of the linear Boltzmann equation and (2) requiring isotropic weight changes at collisions, they derive a new angular biasing scheme. Using the BMC equation, they propose a universal 'safe' upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, they demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
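The exponential transform itself is easy to demonstrate in one dimension. The sketch below shows only the path-length stretching with its weight correction, not the angular biasing or the BMC machinery of the paper; the pure absorber, cross section, and transform parameter p are illustrative choices for which the unbiased answer exp(-ΣT) is known:

```python
import math
import random

def transmission_exp_transform(n_hist, sigma, thickness, p, seed=0):
    """Exponential-transform estimate of transmission through a purely
    absorbing 1-D slab. Flights toward the far face are sampled from a
    stretched cross section sigma* = sigma*(1 - p), and the statistical
    weight is corrected by the ratio of true to biased path-length pdfs."""
    sigma_b = sigma * (1.0 - p)          # biased (stretched) cross section
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_hist):
        s = -math.log(1.0 - rng.random()) / sigma_b
        if s >= thickness:               # reached the far face
            # weight for surviving the whole slab under the true kernel
            score += math.exp(-(sigma - sigma_b) * thickness)
        # else: a collision in a pure absorber kills the history (score 0)
    return score / n_hist
```

Because more histories reach the far face (each carrying a weight below one), the estimator has the same mean as the analog game but a much smaller variance for thick slabs, which is exactly what the transform parameter trades for the stability issues the paper's 'safe' upper limit addresses.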
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are seen in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to a factor of 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
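The photon interaction forcing mentioned above is a standard variance reduction trick: force the interaction to occur inside the geometry by sampling from a truncated exponential, and carry the true interaction probability as a weight factor. The sketch below is the generic textbook form of this trick, not the MCSIM implementation; the cross section and path-length cap are arbitrary:

```python
import math
import random

def forced_interaction(sigma, d_max, rng):
    """Interaction forcing: make the photon interact within the geometry
    (path length <= d_max) by sampling from the truncated exponential,
    and carry the true interaction probability 1 - exp(-sigma*d_max) as
    a weight factor. Returns (path_length, weight_factor)."""
    p_int = 1.0 - math.exp(-sigma * d_max)
    xi = rng.random()
    s = -math.log(1.0 - xi * p_int) / sigma   # inverse CDF of the truncated exp
    return s, p_int
```

Every sampled flight now ends in an interaction inside the region of interest, while the weight factor keeps all tallies unbiased; the Russian roulette step the abstract mentions is then used to dispose of the low-weight survivors cheaply.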
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been spent on the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD can improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Building on this technique, we present an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and contrast-to-noise ratio (CNR) using a Monte Carlo simulation. We simulated a CdTe-based PCD X-ray imaging system and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire the KELS images, images of the phantom were obtained using energy windows above and below the iodine K-edge absorption energy (33.2 keV). According to the results, the contrast and standard deviation decreased as the subtraction energy width of the energy window increased. Also, the CNR using the KELS imaging technique is higher than that of images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy widths of the energy window were 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized subtraction energy width of the energy window, and based on
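The arithmetic behind K-edge log-subtraction is simple Beer-Lambert bookkeeping: attenuation that changes little across the edge (tissue) cancels in the difference of log-attenuations, while iodine's jump at 33.2 keV survives. The sketch below uses made-up attenuation coefficients purely for illustration; it is not the simulation of the paper:

```python
import math

def kes_log_subtraction(i_below, i_above, i0_below, i0_above):
    """K-edge log-subtraction: ln(I0/I) in the window above the K-edge
    minus ln(I0/I) below it. Background tissue attenuates both windows
    almost equally and cancels; iodine, whose attenuation jumps just
    above 33.2 keV, is left enhanced."""
    return math.log(i0_above / i_above) - math.log(i0_below / i_below)

def detected(i0, mu_t, t_tissue, mu_i, t_iodine):
    """Beer-Lambert transmission through tissue plus an iodine insert."""
    return i0 * math.exp(-mu_t * t_tissue - mu_i * t_iodine)

# Illustrative (made-up) linear attenuation coefficients per cm: tissue
# changes little across the edge, iodine jumps sharply just above it.
mu_tissue_below, mu_tissue_above = 0.32, 0.30
mu_iodine_below, mu_iodine_above = 7.0, 25.0

# Signal with and without a 1 mm iodine insert behind 10 cm of tissue:
i0, t = 1000.0, 10.0
bg = kes_log_subtraction(detected(i0, mu_tissue_below, t, mu_iodine_below, 0.0),
                         detected(i0, mu_tissue_above, t, mu_iodine_above, 0.0),
                         i0, i0)
iod = kes_log_subtraction(detected(i0, mu_tissue_below, t, mu_iodine_below, 0.1),
                          detected(i0, mu_tissue_above, t, mu_iodine_above, 0.1),
                          i0, i0)
```

The 10 cm of tissue contributes only a small residual to the subtraction, while the 1 mm iodine insert stands out strongly, which is the contrast enhancement the KELS windows are tuned to maximize.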
Chabert, I; Barat, E; Dautremer, T; Montagu, T; Agelou, M; Croc de Suray, A; Garcia-Hernandez, J C; Gempp, S; Benkreira, M; de Carlan, L; Lazaro, D
2016-07-21
This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated-histograms were built by storing E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme. PMID:27353090
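The global gamma index used for the comparisons above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; with a 0 mm DTA, as in this paper, it reduces to a pure normalized dose difference. The sketch below is a generic 1-D implementation of the standard definition, not the authors' evaluation code; profile values and criteria in the usage are invented:

```python
import math

def gamma_index(ref, eval_, spacing, dose_crit, dta):
    """Global gamma index on a 1-D dose profile. For each evaluated point,
    search the reference profile for the minimum combination of dose
    difference and distance-to-agreement; dose differences are normalized
    to the maximum reference dose ('global' normalization). gamma <= 1
    at a point means that point passes."""
    d_max = max(ref)
    gammas = []
    for i, de in enumerate(eval_):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing
            if dta == 0.0 and dist > 0.0:
                continue              # a 0 mm criterion compares same points only
            ddose = (de - dr) / (dose_crit * d_max)
            g2 = ddose ** 2 + ((dist / dta) ** 2 if dta > 0.0 else 0.0)
            best = min(best, math.sqrt(g2))
        gammas.append(best)
    return gammas
```

With a nonzero DTA, a spatially shifted but dose-correct feature can still pass, which is why the paper's 0 mm criteria are the stricter test of the virtual source models.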
Transport and Anderson localization in disordered two-dimensional photonic lattices.
Schwartz, Tal; Bartal, Guy; Fishman, Shmuel; Segev, Mordechai
2007-03-01
One of the most interesting phenomena in solid-state physics is Anderson localization, which predicts that an electron may become immobile when placed in a disordered lattice. The origin of localization is interference between multiple scatterings of the electron by random defects in the potential, altering the eigenmodes from being extended (Bloch waves) to exponentially localized. As a result, the material is transformed from a conductor to an insulator. Anderson's work dates back to 1958, yet strong localization has never been observed in atomic crystals, because localization occurs only if the potential (the periodic lattice and the fluctuations superimposed on it) is time-independent. However, in atomic crystals important deviations from the Anderson model always occur, because of thermally excited phonons and electron-electron interactions. Realizing that Anderson localization is a wave phenomenon relying on interference, these concepts were extended to optics. Indeed, both weak and strong localization effects were experimentally demonstrated, traditionally by studying the transmission properties of randomly distributed optical scatterers (typically suspensions or powders of dielectric materials). However, in these studies the potential was fully random, rather than being 'frozen' fluctuations on a periodic potential, as the Anderson model assumes. Here we report the experimental observation of Anderson localization in a perturbed periodic potential: the transverse localization of light caused by random fluctuations on a two-dimensional photonic lattice. We demonstrate how ballistic transport becomes diffusive in the presence of disorder, and that crossover to Anderson localization occurs at a higher level of disorder. Finally, we study how nonlinearities affect Anderson localization. As Anderson localization is a universal phenomenon, the ideas presented here could also be implemented in other systems (for example, matter waves), thereby making it feasible
NASA Astrophysics Data System (ADS)
Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.
2010-04-01
Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.
Automated variance reduction for Monte Carlo shielding analyses with MCNP
NASA Astrophysics Data System (ADS)
Radulescu, Georgeta
Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase space of interest and thereby lower the variance of the statistical estimation. Variance reduction parameters are required to perform such Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be determined efficiently with minimal effort.
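The weight-window technique that the adjoint calculation parameterizes is itself a simple game: split particles whose weight rises above the window, roulette those that fall below it. The sketch below shows only that game; in practice the window bounds are space- and energy-dependent and derived from the adjoint (importance) function, whereas here they are free parameters chosen for illustration:

```python
def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window game: split particles above the window, roulette
    those below it, and leave those inside alone. Returns a list of
    (possibly zero) particle weights whose expected total equals the
    input weight, so all tallies remain unbiased."""
    if weight > w_high:
        n = min(int(weight / w_high) + 1, 10)   # cap the splitting ratio
        return [weight / n] * n
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:
            return [w_survive]                  # survives with boosted weight
        return []                               # killed
    return [weight]
```

Setting the window low in important regions (splitting particles headed toward the detector) and high in unimportant ones (rouletting particles headed away) is exactly the population control that the adjoint-based biasing parameters automate.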
NASA Astrophysics Data System (ADS)
Sundaresan, Sasi; Jayasekera, Thushari; Ahmed, Shaikh
2014-03-01
The Monte Carlo based statistical approach to solving the Boltzmann Transport Equation (BTE) has become the norm for investigating heat transport in semiconductors in the sub-micron regime, owing to its ability to treat realistically sized device geometries. One weakness of this technique is that it predominantly uses an empirically fitted phonon dispersion relation as input to determine the properties of phonons and to predict the thermal conductivity for a specified material geometry. The empirically fitted dispersion relations assume the harmonic approximation, thereby failing to account for thermal expansion, the effect of strain on spring stiffness, and accurate phonon-phonon interactions. To account for the anharmonic contributions in the calculation of thermal conductivity, in this work we employ a coupled molecular mechanics-Monte Carlo (MM-MC) approach. The atomistically resolved, non-deterministic approach adopted in this work is found to produce satisfactory results on heat transport and thermal conductivity in both the ballistic and the diffusive regimes for III-N nanostructures. Supported by the U.S. National Science Foundation Grant No. CCF-1218839.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
MCNP/X TRANSPORT IN THE TABULAR REGIME
HUGHES, H. GRADY
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
NASA Astrophysics Data System (ADS)
Lee, Young-Jin; Park, Su-Jin; Lee, Seung-Wan; Kim, Dae-Hong; Kim, Ye-Seul; Kim, Hee-Joung
2013-05-01
The photon counting detector based on cadmium telluride (CdTe) or cadmium zinc telluride (CZT) is a promising imaging modality that provides many benefits compared to conventional scintillation detectors. By using a pinhole collimator with the photon counting detector, we were able to improve both the spatial resolution and the sensitivity. The purpose of this study was to evaluate the photon counting and conventional scintillation detectors in a pinhole single-photon emission computed tomography (SPECT) system. We designed five pinhole SPECT systems of two types: one type with a CdTe photon counting detector and the other with a conventional NaI(Tl) scintillation detector. We conducted simulation studies and evaluated imaging performance. The results demonstrated that the spatial resolution of the CdTe photon counting detector was 0.38 mm, with a sensitivity 1.40 times greater than that of a conventional NaI(Tl) scintillation detector for the same detector thickness. Also, the average scatter fractions of the CdTe photon counting and the conventional NaI(Tl) scintillation detectors were 1.93% and 2.44%, respectively. In conclusion, we successfully evaluated various pinhole SPECT systems for small animal imaging.
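The resolution and sensitivity figures quoted above follow the standard pinhole-collimator trade-off. The sketch below encodes the usual textbook expressions (geometric blur referred to the object plane, and the on-axis point-source efficiency with its cos³ falloff); these are generic formulas with arbitrary example dimensions, not parameters of the simulated systems in the paper:

```python
import math

def pinhole_resolution(d, r_intrinsic, z, l):
    """System resolution (FWHM, referred to the object plane) of a pinhole
    collimator: geometric blur d*(1 + 1/M) combined in quadrature with the
    detector's intrinsic resolution demagnified by M = l/z, where z is the
    source-to-pinhole and l the pinhole-to-detector distance."""
    m = l / z                                   # magnification
    r_geom = d * (1.0 + 1.0 / m)
    return math.sqrt(r_geom ** 2 + (r_intrinsic / m) ** 2)

def pinhole_sensitivity(d, z, theta=0.0):
    """Geometric efficiency of a pinhole of diameter d for a point source
    at distance z, with the cos^3 falloff for off-axis angles theta."""
    return (d ** 2) * math.cos(theta) ** 3 / (16.0 * z ** 2)
```

Halving the aperture diameter improves the geometric resolution but cuts the sensitivity fourfold, which is why a detector with better intrinsic resolution (such as the CdTe PCD studied here) lets a larger aperture recover sensitivity at the same system resolution.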
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Anfinrud, P.A.; Hart, D.E.; Hedstrom, J.F.; Struve, W.S.
1986-05-22
Time-correlated photon counting has been used to measure fluorescence concentration depolarization for rhodamine 6G in glycerol. The excitation transport theory developed by Gochanour, Andersen, and Fayer yields good approximations to the experimental decay profiles over the concentration range 1.7 × 10^-4 to 2.4 × 10^-3 M. Although the differences between optimized theoretical and experimental profiles are fractionally small, they are readily characterized under present counting statistics. They prove to be dominated by experimental artifacts, arising from excitation trapping by rhodamine 6G aggregates and from self-absorption in solution cells thicker than approximately 10 μm.
An Electron/Photon/Relaxation Data Library for MCNP6
Hughes, III, H. Grady
2015-08-07
The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.
NASA Astrophysics Data System (ADS)
Hespeels, F.; Tonneau, R.; Ikeda, T.; Lucas, S.
2015-11-01
This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV, respectively): classical platinum TEM-like collimators, straight glass capillaries and tapered glass capillaries. In addition, we developed a Monte Carlo code, based on Rutherford scattering theory, which simulates particle transport through collimating devices. The simulation results match the experimental observations of beam transport through collimators both in air and in vacuum. This research shows the focusing effect of tapered capillaries, which clearly enables a higher transmission flux. Nevertheless, aligning the capillaries with the incident beam is a prerequisite but tedious, which makes the TEM collimator the easiest way to produce a 50 μm microbeam.
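The core sampling step of a Rutherford-scattering Monte Carlo is drawing a polar scattering angle from the (screened) Rutherford distribution. The sketch below is a generic inversion-sampling routine, not the authors' code; the screening parameter η, which regularizes the forward singularity, is left as a free input:

```python
import random

def sample_screened_rutherford(eta, rng):
    """Sample mu = cos(theta) from the screened Rutherford distribution
    p(mu) proportional to (1 - mu + 2*eta)^-2 on [-1, 1], by direct
    inversion of the cumulative distribution function."""
    xi = rng.random()
    # F(mu) interpolates 1/(1 - mu + 2*eta) between its endpoint values:
    a = 1.0 / (2.0 + 2.0 * eta)   # value at mu = -1
    b = 1.0 / (2.0 * eta)         # value at mu = +1
    return 1.0 + 2.0 * eta - 1.0 / (a + xi * (b - a))
```

For small η the samples pile up at glancing angles (mu close to 1), which is why charged particles grazing a capillary wall are deflected only slightly and can be guided down the bore, the effect behind the tapered-capillary focusing observed here.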
Johnson, J.O.
2000-10-23
The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an ~0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
NASA Astrophysics Data System (ADS)
Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.
2015-11-01
Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
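The Monte Carlo randomization test described above compares an observed network's clustering against a degree-preserving null model. The sketch below shows that null model in its simplest form (double edge swaps on a ring lattice); it does not reproduce the paper's planarity-preserving randomizations, which are more involved, and the lattice size and swap count are arbitrary:

```python
import random

def ring_lattice(n, k):
    """Ring lattice: each node linked to its k nearest neighbours per side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient (linked neighbour pairs over
    possible neighbour pairs, averaged over nodes)."""
    total = 0.0
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += links / (d * (d - 1) / 2)
    return total / len(adj)

def randomize(adj, n_swaps, rng):
    """Degree-preserving Monte Carlo null model: repeatedly swap the
    endpoints of two random edges (double edge swap), rejecting swaps
    that would create self-loops or duplicate edges."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue
        adj[a].remove(b); adj[b].remove(a); adj[c].remove(d); adj[d].remove(c)
        adj[a].add(d); adj[d].add(a); adj[c].add(b); adj[b].add(c)
        edges[edges.index((a, b))] = (min(a, d), max(a, d))
        edges[edges.index((c, d))] = (min(c, b), max(c, b))
    return adj
```

A lattice keeps its high clustering, while its degree-preserving randomization loses it; a small-world claim rests on the observed network sitting between those two references, and the paper's point is that the planar null model is the fairer of the two for road and rail graphs.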
NASA Astrophysics Data System (ADS)
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use. The compromise between these two opposing factors has given rise to several dose calculation algorithms, ranging from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on fundamental physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware existed before 2007, and some scientific computations were already carried out on GPUs; 2007, however, marks the arrival of the CUDA programming language, which makes it possible to program the GPU without dealing with graphics contexts. The GPU is a massively parallel computing platform well suited to data-parallel algorithms. This thesis investigates how to maximize the use of a GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out entirely on the GPU. The first objective of this thesis is to evaluate this method for external radiotherapy calculations. Simple monoenergetic sources and layered phantoms are used, and a comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a 2%-2 mm gamma criterion while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective is to evaluate the platform for brachytherapy calculations. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43
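The 2%-2 mm agreement quoted above refers to the gamma index, the standard metric for comparing two dose distributions. A minimal one-dimensional sketch of that metric follows; the function name, the global normalization to the reference maximum, and the toy exponential depth-dose profiles are illustrative assumptions, not GPUMCD's or EGSnrc's actual comparison code.

```python
def gamma_1d(ref, test, spacing_mm, dose_tol=0.02, dist_tol_mm=2.0):
    """1-D gamma index of `test` against `ref` (2%/2 mm by default).

    The dose difference is normalized to the reference maximum (global gamma).
    Returns one gamma value per reference point; gamma <= 1 is a pass.
    """
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dt in enumerate(test):
            dd = (dt - dr) / (dose_tol * d_max)        # dose-difference term
            dx = (j - i) * spacing_mm / dist_tol_mm    # distance-to-agreement term
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        gammas.append(best)
    return gammas

# Two nearly identical depth-dose profiles sampled every 1 mm:
# a uniform 1% dose offset is well inside the 2%/2 mm tolerance.
ref = [100 * 0.95 ** i for i in range(50)]
test = [d * 1.01 for d in ref]
g = gamma_1d(ref, test, spacing_mm=1.0)
pass_rate = sum(1 for x in g if x <= 1.0) / len(g)
```

A statement such as "within a 2%-2 mm gamma criterion" typically means the pass rate computed this way meets a chosen threshold over the region of interest.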
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and successive generations of treatment planning systems have been developed around this trade-off. This article is a review of fast Monte Carlo treatment planning algorithms, which aim to be accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue, using the physics of the interaction of particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; Monte Carlo techniques are therefore accurate, but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in the fast Monte
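The particle-by-particle transport the review describes reduces, in its simplest form, to sampling an exponentially distributed free path and testing it against the geometry. The sketch below estimates transmission through a purely absorbing slab and checks it against the Beer-Lambert law; the function name, parameters, and absorption-only physics (no scattering, no secondary electrons) are simplifying assumptions for illustration, far short of a clinical dose engine.

```python
import math
import random

def transmit_fraction(mu_per_mm, thickness_mm, n_photons, seed=0):
    """Estimate the fraction of photons crossing an absorbing slab.

    Each photon's free path is sampled as s = -ln(xi)/mu; with absorption
    as the only interaction, a photon is transmitted iff s > thickness.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        s = -math.log(1.0 - rng.random()) / mu_per_mm  # sampled free path (mm)
        if s > thickness_mm:
            transmitted += 1
    return transmitted / n_photons

mu, t = 0.1, 10.0                     # attenuation 0.1/mm, 10 mm slab
mc = transmit_fraction(mu, t, 50_000)
analytic = math.exp(-mu * t)          # Beer-Lambert transmission
```

Each additional physical process (Compton scattering, photoelectric absorption, electron transport) adds a sampling step per interaction, which is why full Monte Carlo is accurate but slow and why the fast variants reviewed here approximate or vectorize these steps.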