Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)
Kirk, B.L.; West, J.T.
1984-06-01
The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on IBM computers, in either batch or interactive mode, are provided.
Photons, Electrons and Positrons Transport in 3D by Monte Carlo Techniques
2014-12-01
Version 04 FOTELP-2014 is a new compact general-purpose version of the previous FOTELP-2K6 code, designed to simulate the transport of photons, electrons and positrons through three-dimensional material and source geometries by Monte Carlo techniques, using the PENGEOM subroutine package from the PENELOPE code under Linux-based and Windows operating systems. This new version includes the routine ELMAG for simulating electron and positron transport in electric and magnetic fields, a RESUME option, and the routine TIMER for obtaining the starting random number and for measuring the simulation time.
NASA Astrophysics Data System (ADS)
Jacqmin, Dustin J.
Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time-consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and
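The macro Monte Carlo lookup described above, in which many small condensed-history steps are replaced by one pre-computed large step drawn from a database, can be sketched as follows (a minimal illustration with a hypothetical two-entry database, not the PMMC implementation):

```python
import random

# Hypothetical database of pre-computed macro steps, keyed by
# (material, energy bin). Each entry lists possible outcomes of one
# large transport step: (exit_energy_MeV, exit_angle_deg, probability).
STEP_DB = {
    ("water", 100): [(97.0, 2.0, 0.7), (95.0, 5.0, 0.3)],
    ("bone", 100): [(96.0, 3.0, 0.6), (93.0, 7.0, 0.4)],
}

def take_macro_step(material, energy_bin, rng=random.random):
    """Sample one pre-computed macro step instead of many small MC steps."""
    outcomes = STEP_DB[(material, energy_bin)]
    u = rng()
    cumulative = 0.0
    for exit_energy, exit_angle, prob in outcomes:
        cumulative += prob
        if u < cumulative:
            return exit_energy, exit_angle
    return outcomes[-1][:2]  # guard against floating-point round-off

energy, angle = take_macro_step("water", 100)
```

In the real codes, the database outcomes come from full condensed-history simulations of transport across homogeneous spheres, tabulated over many materials and energies.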
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
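One of the acceleration ingredients mentioned above, variance reduction, can be illustrated with a standard Russian roulette step (a generic textbook sketch, not SuperMC's actual implementation; the threshold and survival weight are illustrative):

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random.random):
    """Kill low-weight particles probabilistically, boosting survivors'
    weight so the tally stays unbiased: E[new weight] == old weight."""
    if weight >= threshold:
        return weight           # heavy enough: keep tracking as-is
    if rng() < weight / survival_weight:
        return survival_weight  # survives with increased weight
    return 0.0                  # killed; stop tracking this history

# A particle of weight 0.05 survives with probability 0.1 at weight 0.5,
# so time is no longer wasted on histories that barely affect the dose.
```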
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
Chi, Y; Tian, Z; Jiang, S; Jia, X
2015-06-15
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only in voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aiming at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define geometric relationships between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into GPU’s shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
2015-01-01
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Space applications of the MITS electron-photon Monte Carlo transport code system
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Code System for Monte Carlo Simulation of Electron and Photon Transport.
2015-07-01
Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which hard events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while soft interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
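The mixed (class II) scheme described above can be reduced to a simple decision per charged-particle interaction (a schematic sketch with illustrative cutoff values, not PENELOPE's code):

```python
def classify_interaction(energy_loss_frac, deflection_rad,
                         w_cut=0.01, theta_cut=0.05):
    """Mixed-scheme bookkeeping: 'hard' events (fractional energy loss or
    deflection angle above pre-selected cutoffs) are simulated in detail;
    everything below is lumped into a condensed multiple-scattering step.
    Cutoff values here are illustrative, not PENELOPE's defaults."""
    if energy_loss_frac > w_cut or deflection_rad > theta_cut:
        return "hard"   # simulate interaction-by-interaction
    return "soft"       # absorb into a multiple-scattering approximation

# A bremsstrahlung event losing 5% of the energy is treated in detail:
kind = classify_interaction(energy_loss_frac=0.05, deflection_rad=0.0)
```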
Monte Carlo simulation of photon transport in a randomly oriented sphere-cylinder scattering medium
NASA Astrophysics Data System (ADS)
Linder, T.; Löfqvist, T.
2011-11-01
A Monte Carlo simulation tool for simulating photon transport in a randomly oriented sphere-cylinder medium has been developed. The simulated medium represents a paper pulp suspension where the constituents are assumed to be mono-disperse micro-spheres, representing dispersed fiber fragments, and infinitely long, straight, randomly oriented cylinders representing fibers. The diameter of the micro-spheres is on the order of the wavelength, and their scattering is described by Mie scattering theory. The fiber diameter is considerably larger than the wavelength, and the photon scattering is therefore determined by an analytical solution of Maxwell's equations for scattering at an infinitely long cylinder. By employing a Stokes-Mueller formalism, the software tracks the polarization of the light while it propagates through the medium. The effects of varying volume concentrations and sizes of the scattering components on reflection, transmission and polarization of the incident light are investigated. It is shown that not only the size but also the shape of the particles has a significant impact on the depolarization.
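The Stokes-Mueller bookkeeping described above amounts to multiplying the photon packet's Stokes vector by a scattering Mueller matrix at each event. A minimal sketch for Rayleigh (small-sphere) scattering, omitting the reference-frame rotations a full implementation needs:

```python
import numpy as np

def rayleigh_mueller(theta):
    """Unnormalized Mueller matrix for Rayleigh scattering at angle theta."""
    c = np.cos(theta)
    return 0.5 * np.array([
        [1 + c**2, c**2 - 1, 0.0,   0.0],
        [c**2 - 1, 1 + c**2, 0.0,   0.0],
        [0.0,      0.0,      2 * c, 0.0],
        [0.0,      0.0,      0.0,   2 * c],
    ])

# Fully linearly polarized light, S = [I, Q, U, V], scattered at 90 deg:
stokes_in = np.array([1.0, 1.0, 0.0, 0.0])
stokes_out = rayleigh_mueller(np.pi / 2) @ stokes_in
# At 90 deg (cos(theta) = 0) this polarization component vanishes:
# a dipole does not radiate along its oscillation axis.
```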
Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.
VALDEZ, GREG D.
2012-11-30
Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
NASA Astrophysics Data System (ADS)
Kum, Oyeon
2004-11-01
Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. This type of system, so called ``Doctor by Information Technology'', will be useful for providing high quality medical services everywhere. However, the large amount of computing time required by the well-known general purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution to provide ``accurate'' dose distributions within an ``acceptable'' time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random number series on different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones. The parallel efficiency approached 100%, as expected.
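The overlapping-random-stream problem mentioned above is commonly solved by deriving statistically independent child streams from one master seed, one per processor; a sketch of the idea using NumPy's SeedSequence (an illustration of the technique, not the PMCEPT code):

```python
import numpy as np

def make_rank_rngs(master_seed, n_ranks):
    """One independent random stream per MPI rank, derived from one seed.
    SeedSequence.spawn guarantees the child streams do not overlap."""
    children = np.random.SeedSequence(master_seed).spawn(n_ranks)
    return [np.random.default_rng(child) for child in children]

rngs = make_rank_rngs(master_seed=12345, n_ranks=4)
# Each rank draws from its own stream; histories stay uncorrelated.
samples = [rng.random(3) for rng in rngs]
```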
Cullen, D.E
2000-11-22
TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
NASA Astrophysics Data System (ADS)
Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun
2012-05-01
Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. The thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed with the measured data to within a maximum error of about 2%. Moreover, considering the experimental errors, the PMCEPT results can provide a gold standard of dose distributions for radiotherapy. The computation was also much faster than measurement, although it remains a bottleneck for direct application to the daily routine treatment planning procedure.
Su, L.; Du, X.; Liu, T.; Xu, X. G.
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6x10^6 histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
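The delta tracking method mentioned for photon transport samples free flights against a majorant cross section and accepts collisions with probability mu(x)/mu_max, so no ray tracing through voxel boundaries is needed. A minimal 1D sketch (illustrative coefficients, not the ARCHER implementation):

```python
import math
import random

def woodcock_flight(x, mu_of_x, mu_max, rng=random.random):
    """Advance a photon to its next *real* collision site via delta tracking.

    mu_of_x: attenuation coefficient at position x (1/cm)
    mu_max:  majorant, must satisfy mu_max >= mu_of_x(x) everywhere
    """
    while True:
        x += -math.log(rng()) / mu_max   # sample flight against the majorant
        if rng() < mu_of_x(x) / mu_max:  # accept as a real collision
            return x
        # otherwise: virtual ("delta") collision, keep flying unchanged

# Example: water-like medium (mu = 0.2/cm) for x < 5, denser beyond.
mu = lambda x: 0.2 if x < 5.0 else 0.5
site = woodcock_flight(0.0, mu, mu_max=0.5)
```

Rejected (virtual) collisions cost only a random number, which is why the scheme maps well onto GPUs with voxelized media.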
Behin-Ain, S; van Doorn, T; Patterson, J R
2002-02-01
A time-resolved indeterministic Monte Carlo (IMC) simulation technique is proposed for the efficient construction of the early part of the temporal point spread function (TPSF) of visible or near infrared photons transmitted through an optically thick scattering medium. By assuming a detected photon is a superposition of photon components, the photon is repropagated from a point in the original path where a significant delay in forward propagation occurred. A weight is then associated with each subsequently detected photon to compensate for shorter components. The technique is shown to reduce the computation time by a factor of at least 4 when simulating the sub-200 picosecond region of the TPSF and hence provides a useful tool for analysis of single photon detection in transillumination imaging.
Monte Carlo Particle Transport: Algorithm and Performance Overview
Gentile, N; Procassini, R; Scott, H
2005-06-02
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
Moon, Seyoung; Kim, Donghyun; Sim, Eunji
2008-01-20
We employ a Monte Carlo (MC) algorithm to investigate the decoherence of diffuse photons in turbid media. For the MC simulation of coherent photons, the degree of coherence, defined as a random variable for a photon packet, is associated with a decoherence function that depends on the scattering angle and is updated as a photon interacts with the medium via scattering. Using a slab model, the effects of the medium's scattering properties were studied. The results reveal that a linear random-variable model for the degree of coherence agrees better with experimental results than a sinusoidal model, and that decoherence is rapid for the first few scattering events, followed by a slow and gradual decrease of coherence.
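The per-event update described above can be sketched generically: each scattering event multiplies the packet's degree of coherence by an angle-dependent factor. The linear form below is a hypothetical stand-in for the paper's fitted decoherence function:

```python
import math

def update_coherence(degree, scatter_angle, slope=1.0 / math.pi):
    """Reduce a photon packet's degree of coherence after one scattering
    event. The linear-in-angle factor is illustrative only: wider
    scattering angles destroy more coherence, vanishing at angle pi."""
    factor = max(0.0, 1.0 - slope * scatter_angle)
    return degree * factor

# Coherence drops quickly over the first few wide-angle events, then
# decays slowly once only small-angle scattering remains.
g = 1.0
for angle in [1.2, 0.9, 0.3]:
    g = update_coherence(g, angle)
```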
White, Morgan C.
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Continuous Energy Photon Transport Implementation in MCATK
Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed; Nolen, Steven Douglas; Hughes, Henry Grady; Pritchett-Sheats, Lori A.; Werner, Christopher John
2016-10-31
The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.
Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code
Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.
2013-07-01
With the number of cells exceeding a million, the number of tallies exceeding a hundred million, and the number of particle histories exceeding ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. Moreover, the memory required by the model exceeds that of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model through domain decomposition and nested parallel computation. The k_eff and the flux of each cell are obtained. (authors)
Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X
2015-06-15
Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under the OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with a standard analogue simulation scheme for photon transport and a Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on the NVidia GPU platform, for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested the code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within the 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for the electron beam on an NVidia Titan GPU and 35–51 sec for the photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to differences in both the physics model and the development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross
Xu, Y; Tian, Z; Jiang, S; Jia, X; Zhou, L
2015-06-15
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enabled a significant efficiency enhancement for MC particle
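The acceptance test at the heart of such a path-by-path scheme can be illustrated with a short, self-contained sketch. This is not the gMMC implementation: it applies a generic Metropolis-Hastings accept/reject step, with an assumed symmetric random-walk proposal, to a toy scalar "path length" whose target density is simple exponential attenuation exp(-mu*s):

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_samples, rng):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    log_target: log of the (unnormalized) target density -- in a transport
    code this would be the physics-governed probability of an entire path.
    proposal: draws a candidate state from the current one (symmetric).
    """
    samples = []
    x, logp = x0, log_target(x0)
    for _ in range(n_samples):
        cand = proposal(x, rng)
        logp_cand = log_target(cand)
        # Accept with probability min(1, p(cand)/p(x)); rejection keeps the
        # current state, preserving correct relative path probabilities.
        if math.log(rng.random() + 1e-300) < logp_cand - logp:
            x, logp = cand, logp_cand
        samples.append(x)
    return samples

# Toy target: exponential attenuation exp(-mu * s) for path length s >= 0.
mu = 2.0
log_target = lambda s: -mu * s if s >= 0.0 else float("-inf")
proposal = lambda s, rng: s + rng.gauss(0.0, 0.5)
samples = metropolis_hastings(log_target, proposal, 1.0, 50000,
                              random.Random(0))
mean_s = sum(samples) / len(samples)  # should approach 1/mu = 0.5
```

The same accept/reject logic carries over unchanged when the state is a full multi-collision photon path rather than a scalar.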
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two nearly adjacent optical fibers, and take into account the numerical aperture of the fibers as well as reflection and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
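Tissue-optics Monte Carlo codes of this kind sample the photon deflection angle at each scattering event from a phase function. The Henyey-Greenstein function is the common choice for tissue; the phantom study above uses Mie-scattering microspheres, so treat the following as a generic illustration rather than the paper's exact model:

```python
import math
import random

def sample_hg_cos_theta(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function.

    g is the anisotropy factor in (-1, 1); g = 0 gives isotropic
    scattering. This is the standard inversion formula used in
    tissue-optics Monte Carlo codes.
    """
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = random.Random(42)
g = 0.9  # strongly forward-peaked, typical of soft tissue
mean_cos = sum(sample_hg_cos_theta(g, rng) for _ in range(200000)) / 200000
# Under Henyey-Greenstein, the mean of cos(theta) equals g.
```

The anisotropy factor g is the single free parameter; fitting it to the measured microsphere suspensions would be a separate step.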
NAGAYA, YASANOBU
2008-02-29
Version 00 (1) Problems to be solved: MVP/GMVP II can solve eigenvalue and fixed-source problems. The multigroup code GMVP can solve forward and adjoint problems for neutron, photon and coupled neutron-photon transport. The continuous-energy code MVP can solve only forward problems. Both codes can also perform time-dependent calculations. (2) Geometry description: MVP/GMVP employs combinatorial geometry to describe the calculation geometry. It describes spatial regions by combinations of three-dimensional objects (BODIes). Currently, the following BODIes can be used:
- BODIes with linear surfaces: half space, parallelepiped, right parallelepiped, wedge, right hexagonal prism
- BODIes with a quadratic surface and linear surfaces: cylinder, sphere, truncated right cone, truncated elliptic cone, ellipsoid of revolution, general ellipsoid
- Arbitrary quadratic surfaces and tori
Rectangular and hexagonal lattice geometries can be used to describe repeated geometry. Furthermore, a statistical geometry model is available to treat coated fuel particles or pebbles for high-temperature reactors. (3) Particle sources: Various forms of energy-, angle-, space- and time-dependent distribution functions can be specified. See Abstract for more detail.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C and discusses ongoing code development.
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
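The weight-window operation underlying this biasing approach can be sketched compactly. The following is a generic illustration of splitting and Russian roulette with hypothetical window bounds, not the actual implementation studied in the report:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng, w_survive=None):
    """Apply a weight window to one particle; return the list of
    resulting particle weights (possibly split, rouletted, or unchanged).

    Particles above the window are split so each child lies inside it;
    particles below the window play Russian roulette with survival
    weight w_survive (defaulting to the window center).
    """
    if w_survive is None:
        w_survive = 0.5 * (w_low + w_high)
    if weight > w_high:
        n = int(weight / w_high) + 1      # split into n equal children
        return [weight / n] * n
    if weight < w_low:
        # Roulette: survive with probability weight / w_survive, so the
        # expected weight is preserved even though most particles die.
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                          # killed
    return [weight]

# Check that roulette is unbiased on average for a low-weight particle:
rng = random.Random(1)
n_trials = 100000
total = sum(sum(apply_weight_window(0.05, 0.2, 1.0, rng))
            for _ in range(n_trials))
# E[total weight] per trial stays 0.05.
```

Both branches conserve expected weight, which is why weight windows reduce variance without biasing the tally mean.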
Challenges of Monte Carlo Transport
Long, Alex Roberts
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral-particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories, and these data requirements shape parallel algorithms for Monte Carlo. The slides then discuss parallel computational physics and parallel Monte Carlo, and finally present results. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
Coupled electron-photon radiation transport
Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.
2000-01-17
Massively parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
Trahan, Travis J.; Gentile, Nicholas A.
2012-09-10
Statistical uncertainty is inherent to any Monte Carlo simulation of radiation transport problems. In space-angle-frequency independent radiative transfer calculations, the uncertainty in the solution is entirely due to random sampling of source photon emission times. We have developed a modification to the Implicit Monte Carlo algorithm that eliminates noise due to sampling of the emission time of source photons. In problems that are independent of space, angle, and energy, the new algorithm generates a smooth solution, while a standard implicit Monte Carlo solution is noisy. For space- and angle-dependent problems, the new algorithm exhibits reduced noise relative to standard implicit Monte Carlo in some cases, and comparable noise in all other cases. In conclusion, the improvements are limited to short time scales; over long time scales, noise due to random sampling of spatial and angular variables tends to dominate the noise reduction from the new algorithm.
Monte Carlo simulation of photon scattering in biological tissue models.
Kumar, D; Chacko, S; Singh, M
1999-10-01
Monte Carlo simulation of photon scattering, with and without abnormal tissue placed at various locations in rectangular, semi-circular and semi-elliptical tissue models, has been carried out. The tissue considered abnormal has a high absorption coefficient and a low scattering coefficient compared to the control tissue. The placement of the abnormality at various locations within the models affects the transmission and surface emission of photons at various locations. The scattered photons originating from deeper layers make the maximum contribution at farther distances from the beam entry point. The contribution of various layers to photon scattering provides valuable data on the variability of internal composition.
An efficient framework for photon Monte Carlo treatment planning.
Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J
2007-10-07
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions, so automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework in which appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. To provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen; thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of Dicom streams, is used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. Thereby
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems
NASA Astrophysics Data System (ADS)
Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc
2015-11-01
The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect produce electrons as secondary particles which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time consuming because the electrons interact continuously, and therefore the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown the possibility of including a separately computed coupled photon-electron contribution like ISII in a photon calculation, improving on this crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can be safely considered isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with minimal effort. A data library describing the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows almost perfect agreement. The use of the database increases
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
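The Fleck and Cummings scheme referenced above hinges on the Fleck factor, which reinterprets part of the absorption within a time step as effective scattering; this is what permits the large implicit time steps. A minimal sketch of the standard photon-transport form (the neutrino generalization in the paper adds energy and velocity dependence) follows:

```python
# Physical constants in CGS units, as commonly used in IMC codes.
A_RAD = 7.5657e-15    # radiation constant a [erg cm^-3 K^-4]
C_LIGHT = 2.9979e10   # speed of light [cm/s]

def fleck_factor(T, rho, cv, sigma_planck, dt, alpha=1.0):
    """Fleck factor f = 1 / (1 + alpha * beta * c * dt * sigma_P),
    with beta = 4 a T^3 / (rho * cv).

    A fraction f of the absorption opacity is treated as true
    absorption; the remaining (1 - f) becomes effective (elastic)
    scattering, stabilizing the scheme for large time steps.
    """
    beta = 4.0 * A_RAD * T**3 / (rho * cv)
    return 1.0 / (1.0 + alpha * beta * C_LIGHT * dt * sigma_planck)
```

As dt (or the opacity) grows, f shrinks toward zero and more of the interaction is deferred to effective scattering, which is the mechanism behind the large-timestep stability noted in the abstract.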
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which make it possible to analyze complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions and the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations are based on polynomial and rational functions, and their errors are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
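The flavor of such an approximation can be shown in a short sketch. The polynomial below is a generic series-based log approximation, not the authors' specific polynomial/rational fits, used here to sample the exponential free-path length s = -ln(xi)/mu_t that dominates photon-migration run time:

```python
import math
import random

LN2 = 0.6931471805599453

def fast_log(x):
    """Approximate natural log of x > 0 via a mantissa/exponent split and
    a short odd polynomial in t = (m - 1)/(m + 1); relative error is well
    below the 1% level discussed in the paper."""
    m, e = math.frexp(x)                  # x = m * 2**e, m in [0.5, 1)
    t = (m - 1.0) / (m + 1.0)
    t2 = t * t
    log_m = 2.0 * t * (1.0 + t2 / 3.0 + t2 * t2 / 5.0)
    return log_m + e * LN2

def sample_step(mu_t, rng, log_fn=fast_log):
    """Photon free-path length s = -ln(xi) / mu_t."""
    return -log_fn(rng.random()) / mu_t

rng = random.Random(7)
mu_t = 10.0   # total attenuation coefficient [1/cm]
steps = [sample_step(mu_t, rng) for _ in range(100000)]
mean_step = sum(steps) / len(steps)   # should approach 1/mu_t = 0.1 cm
```

The speed-up in practice comes from replacing the transcendental call with a few multiplies and adds; the same idea applies to the trigonometric functions used for scattering angles.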
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
West, J.T.; Murphy, J.
1988-01-01
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
Photon beam description in PEREGRINE for Monte Carlo dose calculations
Cox, L. J., LLNL
1997-03-04
The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, both for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.
Dark Photon Monte Carlo at SeaQuest
NASA Astrophysics Data System (ADS)
Hicks, Caleb; SeaQuest/E906 Collaboration
2016-09-01
Fermi National Laboratory's E906/SeaQuest is an experiment primarily designed to study the ratio of anti-down to anti-up quarks in the nucleon quark sea as a function of Bjorken x. SeaQuest's measurement is obtained by measuring the muon pairs produced by the Drell-Yan process. The experiment can also search for muon pair vertices past the target and beam dump, which would be a signature of Dark Photon decay. It is therefore necessary to run Monte Carlo simulations to determine how a changed Z vertex affects the detection and distribution of muon pairs in SeaQuest's detectors. SeaQuest has an existing Monte Carlo program that has been used for simulations of the Drell-Yan process as well as J/psi decay and other processes. The Monte Carlo program was modified to use a fixed Z vertex when generating muon pairs. Events were then generated with varying Z vertices and the resulting simulations were analyzed. This work focuses on the results of the Monte Carlo simulations and the effects on Dark Photon detection. This research was supported by US DOE MENP Grant DE-FG02-03ER41243.
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogenous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant property methods for the representative system of laser-heated silicon can become larger than 100K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
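The feedback loop described (deposit energy, update temperature, re-select local properties) can be sketched deterministically using the Beer-Lambert attenuation the authors validate against. A hypothetical linear mu_a(T) stands in for measured data, and scattering and heat conduction are omitted for brevity; this is an illustration of the coupling pattern, not the authors' simulation:

```python
import math

def run_coupled_heating(n_steps, dt, n_cells, dx, fluence_rate,
                        mu_a_of_T, rho, cp, T0):
    """Minimal 1-D coupling of photon energy deposition to a temperature
    update with a temperature-dependent absorption coefficient.

    Each time step: (1) attenuate the beam cell by cell using the local
    mu_a(T); (2) deposit the absorbed power as heat in that cell.
    """
    T = [T0] * n_cells
    for _ in range(n_steps):
        phi = fluence_rate                # incident power flux [W/cm^2]
        for i in range(n_cells):
            mu_a = mu_a_of_T(T[i])        # re-select property at local T
            absorbed = phi * (1.0 - math.exp(-mu_a * dx))   # [W/cm^2]
            phi -= absorbed
            # Volumetric heating -> dT = q * dt / (rho * cp), q = absorbed/dx
            T[i] += absorbed * dt / (rho * cp * dx)
    return T

# Hypothetical linear temperature dependence of absorption [1/cm]:
mu_a_of_T = lambda T: 1.0 + 0.01 * (T - 300.0)
T = run_coupled_heating(n_steps=100, dt=0.01, n_cells=50, dx=0.02,
                        fluence_rate=100.0, mu_a_of_T=mu_a_of_T,
                        rho=1.0, cp=4.18, T0=300.0)
# Heating is strongest at the surface, so T decreases monotonically with depth.
```

Replacing the Beer-Lambert attenuation with segmented Monte Carlo photon steps, as the paper describes, changes only step (1) of the loop.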
Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study
Zhang, Ying; Feng, Yuanming; Ming, Xin
2016-01-01
A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients of different tumor sites were simulated with Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve a more personalized care for individual patient with dosimetric gains. PMID:26977413
Monte Carlo simulation for the transport beamline
NASA Astrophysics Data System (ADS)
Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.
2013-07-01
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.
Single photon transport by a moving atom
NASA Astrophysics Data System (ADS)
Afanasiev, A. E.; Melentiev, P. N.; Kuzin, A. A.; Kalatskiy, A. Yu; Balykin, V. I.
2017-01-01
The results of an investigation of photon transport through a subwavelength hole in an opaque screen using a single neutral atom are presented. The basis of the proposed and implemented method is the absorption of a photon by a neutral atom immediately before the subwavelength aperture, travel of the atom through the hole, and emission of a photon on the other side of the screen. The realized method is an alternative to the existing approaches to photon transport through a subwavelength aperture: 1) self-sustained transmittance of a photon through the aperture according to Bethe's model; 2) extraordinary transmission due to surface-plasmon excitation.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Diffuse photon density wave measurements and Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.
2015-10-01
Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
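The "global particle find" step described in the abstract above can be illustrated with a toy sketch: in a uniform spatial decomposition, a particle's owning processor follows directly from its coordinate. The function names and the one-dimensional slab layout below are illustrative, not taken from the dissertation's code.

```python
# Toy sketch of a "global particle find" for a uniformly decomposed 1-D slab.
# All names and the geometry are illustrative, not the dissertation's code.

def owning_rank(x, x_min, x_max, n_domains):
    """Return the rank that owns coordinate x in a uniform decomposition."""
    width = (x_max - x_min) / n_domains
    rank = int((x - x_min) // width)
    return min(max(rank, 0), n_domains - 1)  # clamp particles on the boundary

def global_particle_find(particles, x_min, x_max, n_domains):
    """Group stray particles by the rank that should process them next."""
    routed = {r: [] for r in range(n_domains)}
    for x in particles:
        routed[owning_rank(x, x_min, x_max, n_domains)].append(x)
    return routed

routed = global_particle_find([0.1, 3.7, 9.9], 0.0, 10.0, 4)
# each particle now sits in the bucket of its owning domain
```

A real implementation resolves ownership against arbitrary constructive solid geometry domains and ships the routed particles over MPI, but the bucketing logic is the same idea.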
Monte Carlo Estimate to Improve Photon Energy Spectrum Reconstruction
NASA Astrophysics Data System (ADS)
Sawchuk, S.
Improvements to planning radiation treatment for cancer patients and quality control of medical linear accelerators (linacs) can be achieved with explicit knowledge of the photon energy spectrum. Monte Carlo (MC) simulations of linac treatment heads and experimental attenuation analysis are among the most popular ways of obtaining these spectra. Attenuation methods, which combine measurements under narrow-beam geometry with the associated calculation techniques to reconstruct the spectrum from the acquired data, are very practical in a clinical setting, and they can also serve to validate MC simulations. A novel reconstruction method [1], which has been modified [2], utilizes Simpson's rule (SR) to approximate and discretize (1)
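The abstract above refers to discretizing an attenuation integral with Simpson's rule (SR). A minimal sketch of composite Simpson's rule, applied to a toy transmission integral T(t) = ∫ φ(E) exp(−μ(E)t) dE, is shown below; the spectrum φ and attenuation coefficient μ are illustrative stand-ins, not the quantities of the cited method.

```python
# Hedged sketch: composite Simpson's rule of the kind used to discretize the
# transmission integral T(t) = ∫ φ(E) exp(-μ(E) t) dE over energy.
# φ and μ below are illustrative stand-ins, not a real linac spectrum.
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with an even number of intervals n."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

phi = lambda E: E * math.exp(-E)        # toy spectrum shape
mu = lambda E: 0.2 / (1.0 + E)          # toy attenuation coefficient (1/cm)
transmission = lambda t: simpson(lambda E: phi(E) * math.exp(-mu(E) * t),
                                 0.01, 6.0, 200)
```

In the reconstruction setting, measured transmissions at several attenuator thicknesses t constrain the discretized spectrum values that the quadrature weights multiply.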
Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
Performance of three-photon PET imaging: Monte Carlo simulations.
Kacperski, Krzysztof; Spyrou, Nicholas M
2005-12-07
We have recently introduced the idea of making use of three-photon positron annihilations in positron emission tomography. In this paper, the basic characteristics of the three-gamma imaging in PET are studied by means of Monte Carlo simulations and analytical computations. Two typical configurations of human and small animal scanners are considered. Three-photon imaging requires high-energy resolution detectors. Parameters currently attainable by CdZnTe semiconductor detectors, the technology of choice for the future development of radiation imaging, are assumed. Spatial resolution is calculated as a function of detector energy resolution and size, position in the field of view, scanner size and the energies of the three-gamma annihilation photons. Possible ways to improve the spatial resolution obtained for nominal parameters, 1.5 cm and 3.2 mm FWHM for human and small animal scanners, respectively, are indicated. Counting rates of true and random three-photon events for typical human and small animal scanning configurations are assessed. A simple formula for minimum size of lesions detectable in the three-gamma based images is derived. Depending on the contrast and total number of registered counts, lesions of a few mm size for human and sub mm for small animal scanners can be detected.
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has traditionally been applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics calculations of fixed-source and criticality design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
Fano interference in two-photon transport
NASA Astrophysics Data System (ADS)
Xu, Shanshan; Fan, Shanhui
2016-10-01
We present a general input-output formalism for the few-photon transport in multiple waveguide channels coupled to a local cavity. Using this formalism, we study the effect of Fano interference in two-photon quantum transport. We show that the physics of Fano interference can manifest as an asymmetric spectral line shape in the frequency dependence of the two-photon correlation function. The two-photon fluorescence spectrum, on the other hand, does not exhibit the physics of Fano interference.
Fiber transport of spatially entangled photons
NASA Astrophysics Data System (ADS)
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for the distribution of such photons has only very recently been demonstrated in practice and is of fundamental and applied interest. It poses a big challenge compared to the established time- and frequency-domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; we therefore discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
Monte Carlo Analysis of Quantum Transport and Fluctuations in Semiconductors.
1986-02-18
The present report contains technical matter related to the research performed on two different subjects. The first part concerns quantum ... methods to quantum transport within the Liouville formulation. The second part concerns fluctuations of carrier velocities and energies, both in ... interactions) on the transport properties. Keywords: Monte Carlo; Charge Transport; Quantum Transport; Fluctuations; Semiconductor Physics; Master Equation.
Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G
2008-01-01
The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. The technique is based on irradiating the isocenter of a rectangular methacrylate block placed at a distance of 100 cm from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scatter measurements after interaction with the attenuator. Measured ionization values allow the spectrum to be calculated as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements were made on an Elekta Precise linac using a 6 MV photon beam. Relative depth-dose and profile curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%.
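The unfolding step described above, recovering the spectrum as a sum of mono-energetic energy bins from scatter measurements, amounts to solving a small linear system. The sketch below uses an illustrative response matrix, not MCNP5 output, and a plain Gaussian-elimination solver.

```python
# Sketch of the unfolding step: measured scatter signals are modeled as a
# linear combination of pre-simulated mono-energetic responses, and the
# bin weights (the spectrum) are recovered by solving the linear system.
# The response matrix is illustrative, not MCNP5 output.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# columns: detector response per mono-energetic bin; rows: measurement angles
R = [[0.9, 0.3, 0.1],
     [0.4, 0.8, 0.3],
     [0.1, 0.4, 0.7]]
true_weights = [1.0, 2.0, 0.5]
measured = [sum(R[i][j] * true_weights[j] for j in range(3)) for i in range(3)]
weights = solve(R, measured)   # recovers the bin weights
```

In practice the system is overdetermined and noisy, so a least-squares or regularized solve replaces the exact one, but the bin-weight formulation is the same.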
Transport of photons produced by lightning in clouds
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard
1991-01-01
The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are commonly used to explain the nature of these effects: Monte Carlo simulation and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.
Pan, Tianshu; Rasmussen, John C; Lee, Jae Hoon; Sevick-Muraca, Eva M
2007-04-01
Recently, we have presented and experimentally validated a unique numerical solver of the coupled radiative transfer equations (RTEs) for rapidly computing time-dependent excitation and fluorescent light propagation in small animal tomography. Herein, we present a time-dependent Monte Carlo algorithm to validate the forward RTE solver and investigate the impact of physical parameters upon transport-limited measurements in order to best direct the development of the RTE solver for optical tomography. Experimentally, the Monte Carlo simulations for both transport-limited and diffusion-limited propagation are validated using frequency-domain photon migration measurements for 1.0%, 0.5%, and 0.2% intralipid solutions containing 1 microM indocyanine green in a 49 cm3 cylindrical phantom corresponding to the small volume employed in small animal tomography. The comparisons between Monte Carlo simulations and the numerical solutions result in mean percent errors in amplitude and phase shift of less than 5.0% and 0.7 degrees, respectively, at excitation and emission wavelengths for varying anisotropic factors, lifetimes, and modulation frequencies. Monte Carlo simulations indicate that the accuracy of the forward model is enhanced using (i) suitable source models of photon delivery, (ii) accurate anisotropic factors, and (iii) accurate acceptance angles of collected photons. Monte Carlo simulations also show that the accuracy of the diffusion approximation in the small phantom depends upon (i) the ratio d(phantom)/l(tr), where d(phantom) is the phantom diameter and l(tr) is the transport mean free path; and (ii) the anisotropic factor of the medium. The Monte Carlo simulations validate and guide the future development of an appropriate RTE solver for deployment in small animal optical tomography.
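Frequency-domain quantities of the kind compared above, amplitude and phase shift at a modulation frequency, can be obtained from a time-resolved Monte Carlo histogram by a discrete Fourier sum. The sketch below uses a toy temporal point-spread function, not the authors' data, and illustrative function names.

```python
# Sketch: converting a time-resolved photon arrival histogram into the
# frequency-domain amplitude and phase shift measured by a frequency-domain
# photon migration system. The histogram is a toy model, not real data.
import cmath
import math

def fdpm_observables(times_ns, weights, f_mhz):
    """Amplitude and phase shift (degrees) at modulation frequency f_mhz."""
    omega = 2 * math.pi * f_mhz * 1e6 * 1e-9   # rad per ns
    z = sum(w * cmath.exp(-1j * omega * t) for t, w in zip(times_ns, weights))
    return abs(z), math.degrees(-cmath.phase(z))

times = [0.1 * i for i in range(100)]              # arrival times (ns)
tpsf = [math.exp(-t) * t for t in times]           # toy arrival histogram
amp, phase = fdpm_observables(times, tpsf, 100.0)  # 100 MHz modulation
```

Later mean arrival times shift phase upward and attenuate the amplitude, which is the transport signature these comparisons probe.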
Gan, X; Gu, M
2000-04-01
Three-dimensional fluorescence spatial distributions under single-photon and two-photon excitation within a turbid medium are studied with Monte Carlo simulation. It is demonstrated that two-photon excitation has an advantage of producing much less fluorescence light outside the focal region compared with single-photon excitation. With the increase of the concentration of scattering particles in a turbid medium, the position of the maximum fluorescence intensity point shifts from the geometric focal region toward the medium surface. Further studies show that the optical sectioning property of two-photon fluorescence microscopy is degraded in thick turbid media or when the numerical aperture of an objective becomes low.
Photonic sensor applications in transportation security
NASA Astrophysics Data System (ADS)
Krohn, David A.
2007-09-01
There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this linac head was divided into two stages: designing the linac head model using BEAMnrc, then characterizing this model using BEAMDP and analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1 MeV, 6.2 MeV, 6.3 MeV, 6.4 MeV, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom, percent depth doses (PDDs) and beam profiles at a depth of 10 cm, were compared with measurements. The process is considered complete if the dose difference between measured and calculated relative depth-dose data along the central axis and dose profiles at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 leaf collimator and can be used for reliable patient dose calculations. In this commissioning process, the good criteria of dose
Alerstam, Erik; Svensson, Tomas; Andersson-Engels, Stefan
2008-01-01
General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than the same simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
Cheung, Joel Y.C.; Yu, K.N.
2006-01-15
In the algorithm of Leksell GAMMAPLAN (the treatment planning software of the Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of the Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of the Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the ⁶⁰Co source and radiated isotropically into a solid angle of 4π. The percentage of scattered photons among all photons reaching the phantom space was calculated for the different collimators, with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to the single beam dose profiles for the different collimators. Output spectra were calculated for the four different collimators. Increasing the efficiency of the simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. Backscattered photons generated from within the ⁶⁰Co source and the beam channel also contribute to the output spectra.
Review of Monte Carlo modeling of light transport in tissues.
Zhu, Caigang; Liu, Quan
2013-05-01
A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed.
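The general procedure of tracking an individual photon packet mentioned in the survey above is conventionally organized as a hop/drop/spin loop with Russian roulette. The sketch below implements that loop for an isotropically scattering semi-infinite medium; the optical coefficients and roulette threshold are illustrative.

```python
# Minimal sketch of the photon-packet tracking loop: hop (sample a free
# path), drop (deposit weight), spin (scatter), with Russian roulette.
# Semi-infinite medium, isotropic scattering; coefficients are illustrative.
import math
import random

def track_packet(mu_a, mu_s, rng):
    """Return the weight absorbed in the medium for one photon packet."""
    mu_t = mu_a + mu_s
    z, uz, w, absorbed = 0.0, 1.0, 1.0, 0.0
    while True:
        s = -math.log(rng.random()) / mu_t    # hop: sample free path
        z += uz * s
        if z < 0.0:                           # escaped through the surface
            return absorbed
        absorbed += w * mu_a / mu_t           # drop: deposit absorbed weight
        w *= mu_s / mu_t
        uz = 2.0 * rng.random() - 1.0         # spin: isotropic direction cosine
        if w < 1e-4:                          # Russian roulette on low weight
            if rng.random() < 0.1:
                w *= 10.0
            else:
                return absorbed

rng = random.Random(1)
mean_absorbed = sum(track_packet(0.1, 10.0, rng) for _ in range(2000)) / 2000
```

Full codes such as MCML add anisotropic (Henyey-Greenstein) phase functions, layer boundaries with Fresnel reflection, and spatially resolved tallies on top of this same loop.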
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: an algorithm for the collided components of the scalar flux, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are calculated indirectly by the deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the
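The idea of optimizing fluence weights from highly stochastic dose estimates can be sketched as gradient descent on a least-squares objective in which each beamlet-dose entry is perturbed by "few-history" noise. The matrices, noise model, and step size below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: gradient descent on fluence weights driven by noisy
# ("few-history") beamlet-dose estimates. Toy two-beamlet, two-voxel case;
# all numbers and the noise model are illustrative.
import random

D = [[0.8, 0.1], [0.2, 0.9]]   # beamlet-to-voxel dose matrix (toy)
target = [1.0, 1.0]            # prescribed dose per voxel
w = [0.5, 0.5]                 # fluence weights to optimize
rng = random.Random(0)
lr = 0.1
for _ in range(500):
    # each step sees a noisy realization of D, as if from very few histories
    noisy = [[d * (1 + 0.2 * (rng.random() - 0.5)) for d in row] for row in D]
    dose = [sum(noisy[i][j] * w[j] for j in range(2)) for i in range(2)]
    grad = [sum(2 * (dose[i] - target[i]) * noisy[i][j] for i in range(2))
            for j in range(2)]
    w = [max(0.0, w[j] - lr * grad[j]) for j in range(2)]  # keep fluence nonnegative
```

Because the noise has roughly unit mean, the iterates hover near the solution of D w = target even though no single gradient evaluation is accurate, which is the core premise of optimizing during transport.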
Monte Carlo analysis of two-photon fluorescence imaging through a scattering medium.
Blanca, C M; Saloma, C
1998-12-01
The behavior of two-photon fluorescence imaging through a scattering medium is analyzed by use of the Monte Carlo technique. The axial and transverse distributions of the excitation photons in the focused Gaussian beam are derived for both isotropic and anisotropic scatterers at different numerical apertures and at various ratios of the scattering depth with the mean free path. The two-photon fluorescence profiles of the sample are determined from the square of the normalized excitation intensity distributions. For the same lens aperture and scattering medium, two-photon fluorescence imaging offers a sharper and less aberrated axial response than that of single-photon confocal fluorescence imaging. The contrast in the corresponding transverse fluorescence profile is also significantly higher. Also presented are results comparing the effects of isotropic and anisotropic scattering media in confocal reflection imaging. The convergence properties of the Monte Carlo simulation are also discussed.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
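The Map/Reduce split described above, with independent photon histories in Map tasks and absorption scoring in a Reduce task, can be emulated locally. This is a minimal sketch with implicit-capture weighting in a homogeneous medium; the parameters and roulette threshold are illustrative assumptions, not the MC321 or Hadoop implementation.

```python
import math
import random
from collections import Counter

def map_task(n_photons, mu_a=0.1, mu_s=10.0, seed=0):
    """Map task: trace independent photon histories and emit
    (radial_bin, absorbed_weight) pairs. Simplified 1D geometry."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    for _ in range(n_photons):
        r, w = 0.0, 1.0
        while w > 1e-4:  # roulette-style weight cutoff (assumed)
            r += -math.log(1.0 - rng.random()) / mu_t  # sample free path
            yield int(r * 10), w * mu_a / mu_t         # partial absorption
            w *= albedo

def reduce_task(pairs):
    """Reduce task: score total absorbed weight per radial bin."""
    tally = Counter()
    for radial_bin, dw in pairs:
        tally[radial_bin] += dw
    return tally

tally = reduce_task(map_task(1000))
# each photon deposits nearly all of its unit weight before the cutoff
total = sum(tally.values())
```

In the actual deployment, many Map tasks with distinct seeds run on separate nodes and the framework shuffles the emitted pairs to the Reduce task, which is what makes the computation fault-tolerant.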
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs for photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female that was created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasian phantoms. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom have similar trends but are larger than those from the other two models. The differences were due to the racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03040000), National Natural Science Foundation of China (910266004, 11305205, 11305203) and National Special Program for ITER (2014GB112001)
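The SAF quantity underlying this study has a simple definition: the fraction of energy emitted in a source organ that is absorbed in a target organ, divided by the target organ mass. A minimal sketch with illustrative values (not Rad-HUMAN data):

```python
def specific_absorbed_fraction(e_absorbed_mev, e_emitted_mev, target_mass_kg):
    """SAF in kg^-1: absorbed fraction of emitted energy per unit
    target mass. Inputs here are hypothetical tallies, not phantom data."""
    return (e_absorbed_mev / e_emitted_mev) / target_mass_kg

# e.g. 2% of the emitted energy absorbed in a 0.02 kg organ:
saf = specific_absorbed_fraction(e_absorbed_mev=0.02,
                                 e_emitted_mev=1.0,
                                 target_mass_kg=0.02)
```

In a Monte Carlo calculation such as the one described, `e_absorbed_mev` would come from an energy-deposition tally over the target organ's voxels, which is why organ mass and inter-organ distance drive the inter-phantom differences reported.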
Progress in photonic transport network systems
NASA Astrophysics Data System (ADS)
Sato, Ken-Ichi
2002-07-01
The network paradigm is changing rapidly, spurred by the dramatic increase in IP traffic and recent progress in photonic network technologies. A key requirement, enhancing the performance of existing IP-based multimedia communication networks, can be most effectively achieved by introducing optical path technologies that exploit wavelength routing. Cost-effective and reliable optical cross-connection is essential. Different optical switch technologies have been proposed and tested. Among them, the PLC (Planar Lightwave Circuit) switch has demonstrated excellent performance, particularly with regard to system reliability. Network control mechanisms based on the overlay and peer models have been developed. The presentation will highlight some of the key system technologies. To develop very large scale and robust networks, effective traffic engineering capabilities are necessary. This will be achieved through optical path control. To develop future IP-centric networks, an operation mechanism based on distributed control is important. The degree to which the necessary transport and IP routing functions are integrated will determine system cost-effectiveness. The Photonic MPLS (Multi Protocol Label Switching) router, which integrates all the functions and provides seamless operation between IP and optical layers, has been proposed and developed. The technical feasibility of a recent prototype system has been proven. Finally, some of the cutting-edge photonic transport technologies that we have recently developed are demonstrated; these technologies will enable us to achieve another level of network performance enhancement in the future.
Equivalence of four Monte Carlo methods for photon migration in turbid media.
Sassaroli, Angelo; Martelli, Fabrizio
2012-10-01
In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed.
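A building block common to the trajectory-construction schemes compared in this work is inverse-CDF sampling of the photon free path from the exponential attenuation law. A minimal sketch, with an assumed attenuation coefficient (this is not the paper's code, only the shared sampling step):

```python
import math
import random

def sample_free_path(mu_t, rng):
    """Sample step length s with density p(s) = mu_t * exp(-mu_t * s)
    via the inverse CDF: s = -ln(xi) / mu_t, xi uniform in (0, 1]."""
    return -math.log(1.0 - rng.random()) / mu_t

rng = random.Random(42)
mu_t = 2.0  # total attenuation coefficient, mm^-1 (assumed value)
steps = [sample_free_path(mu_t, rng) for _ in range(100000)]
mean = sum(steps) / len(steps)
# the sample mean approaches the mean free path 1/mu_t = 0.5 mm
```

The four methods differ in how absorption and scattering are interleaved around this step (e.g. analog absorption versus weight reduction), which is exactly the equivalence the paper establishes theoretically and numerically.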
Overview and applications of the Monte Carlo radiation transport kit at LLNL
Sale, K E
1999-06-23
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries, using the appropriate level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Monte Carlo simulations of electron transport in strongly attaching gases
NASA Astrophysics Data System (ADS)
Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa
2016-09-01
Extensive loss of electrons in strongly attaching gases imposes significant difficulties in Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedures must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed in a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of lower electric fields where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
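The discrete rescaling procedure described above can be sketched directly: when attachment has depleted the swarm, electrons drawn at random from the survivors are duplicated at discrete time steps until the swarm is back to its nominal size. Because duplicates are sampled from the existing ensemble, the velocity/energy distribution is unchanged in expectation. This is an illustrative sketch, not the authors' simulation code.

```python
import random

def discrete_rescale(swarm, target_size, rng):
    """Duplicate randomly chosen surviving electrons until the swarm
    recovers its nominal size (discrete rescaling)."""
    while len(swarm) < target_size:
        swarm.append(dict(rng.choice(swarm)))  # copy a random survivor
    return swarm

rng = random.Random(1)
# a swarm depleted by attachment, each electron carrying its state
swarm = [{"energy_eV": rng.uniform(0.0, 1.0)} for _ in range(40)]
swarm = discrete_rescale(swarm, 100, rng)
```

The continuous alternative mentioned in the abstract instead adds a fictitious ionization process whose collision frequency matches the attachment frequency, so compensation happens within the collision sampling itself rather than at discrete checkpoints.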
Monte Carlo simulations of charge transport in heterogeneous organic semiconductors
NASA Astrophysics Data System (ADS)
Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta
2015-03-01
The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
Lin, Yuting; McMahon, Stephen J; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan
2014-12-21
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames
Mehta, R.S.; Haworth, D.C.; Modest, M.F.
2010-05-15
A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O2 to 55% O2). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model.
Modelling 6 MV photon beams of a stereotactic radiosurgery system for Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Deng, Jun; Guerrero, Thomas; Ma, C.-M.; Nath, Ravinder
2004-05-01
The goal of this work is to build a multiple source model to represent the 6 MV photon beams from a Cyberknife stereotactic radiosurgery system for Monte Carlo treatment planning dose calculations. To achieve this goal, the 6 MV photon beams have been characterized and modelled using the EGS4/BEAM Monte Carlo system. A dual source model has been used to reconstruct the particle phase space at a plane immediately above the secondary collimator. The proposed model consists of two circular planar sources for the primary photons and the scattered photons, respectively. The dose contribution of the contaminant electrons was found to be on the order of 10⁻³ of the total maximum dose and has therefore been omitted from the source model. Various comparisons have been made to verify the dual source model against the full phase space simulated using the EGS4/BEAM system. The agreement in percent depth dose (PDD) curves and dose profiles between the phase space and the source model was generally within 2%/1 mm for various collimators (5 to 60 mm in diameter) at 80 to 100 cm source-to-surface distances (SSD). Excellent agreement (within 1%/1 mm) was also found between the dose distributions in heterogeneous lung and bone geometry calculated using the original phase space and those calculated using the source model. These results demonstrated the accuracy of the dual source model for Monte Carlo treatment planning dose calculations for the Cyberknife system.
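Sampling a photon start position from a dual source model of this kind amounts to choosing between two circular planar sources and drawing a point uniformly over the chosen disc. The weights and radii below are illustrative assumptions, not the fitted Cyberknife parameters:

```python
import math
import random

def sample_dual_source(rng, w_primary=0.9, r_primary=0.3, r_scatter=2.0):
    """Pick the primary or scattered planar source by its relative
    weight, then sample a point uniformly on that disc (radii in cm,
    all values hypothetical)."""
    label = "primary" if rng.random() < w_primary else "scatter"
    r_max = r_primary if label == "primary" else r_scatter
    r = r_max * math.sqrt(rng.random())   # sqrt gives uniform area density
    phi = 2.0 * math.pi * rng.random()
    return label, (r * math.cos(phi), r * math.sin(phi))

rng = random.Random(7)
samples = [sample_dual_source(rng) for _ in range(10000)]
frac_primary = sum(1 for lbl, _ in samples if lbl == "primary") / len(samples)
# frac_primary should be close to the assumed weight 0.9
```

A full source model would additionally sample energy and direction from fitted distributions for each sub-source; this sketch shows only the two-source spatial decomposition.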
Diffuse photon density wave measurements in comparison with the Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kuzmin, V. L.; Neidrauer, M. T.; Diaz, D.; Zubkov, L. A.
2015-03-01
The Diffuse Photon Density Wave (DPDW) methodology is widely used in a number of biomedical applications. Here we present results of Monte Carlo simulations that employ an effective numerical procedure, based upon a description of radiative transfer in terms of the Bethe-Salpeter equation, and compare them with measurements from Intralipid aqueous solutions. In our scheme every act of scattering contributes to the signal. We find the Monte Carlo simulations and measurements to be in very good agreement for a wide range of source-detector separations.
NASA Astrophysics Data System (ADS)
Li, Shengfu; Chen, Guanghua; Wang, Rongbo; Luo, Zhengxiong; Peng, Qixian
2016-12-01
This paper proposes a Monte Carlo (MC) based angular distribution estimation method of multiply scattered photons for underwater imaging. This method targets turbid waters. Our method is based on applying typical Monte Carlo ideas to the present problem by combining all the points on a spherical surface. The proposed method is validated against the numerical solution of the radiative transfer equation (RTE). The simulation results based on typical optical parameters of turbid waters show that the proposed method is effective in terms of computational speed and sensitivity.
Frankl, Matthias; Macián-Juan, Rafael
2016-03-01
The development of intensity-modulated radiotherapy treatments delivering large amounts of monitor units (MUs) recently raised concern about higher risks for secondary malignancies. In this study, optimised combinations of several variance reduction techniques (VRTs) have been implemented in order to achieve high precision in Monte Carlo (MC) radiation transport simulations and the calculation of in- and out-of-field photon and neutron dose-equivalent distributions in an anthropomorphic phantom using MCNPX, v.2.7. The computer model included a Varian Clinac 2100C treatment head and a high-resolution head phantom. By means of the applied VRTs, a relative uncertainty for the photon dose-equivalent distribution of <1 % in-field and 15 % on average over the rest of the phantom could be obtained. Neutron dose equivalent, caused by photonuclear reactions in the linear accelerator components at photon energies above approximately 8 MeV, has been calculated. The relative uncertainty, calculated for each voxel, could be kept below 5 % on average over all voxels of the phantom. Thus, a very detailed neutron dose distribution could be obtained. The achieved precision now allows a far better estimation of both photon and especially neutron doses out-of-field, where neutrons can become the predominant component of secondary radiation.
Towards real-time photon Monte Carlo dose calculation in the cloud.
Ziegenhein, Peter; Kozin, Igor; Kamerling, Cornelis Philippus; Oelfke, Uwe
2017-01-31
Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as GPUs or clusters of central processing unit (CPU)-based systems. Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and efficiently transports data to and from the cloud. The client application integrates seamlessly into a Treatment Planning System (TPS). It runs the MC simulation workflow automatically and securely exchanges simulation data with the server-side application that controls the virtual supercomputer. The Advanced Encryption Standard (AES) was used to add an additional security layer which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation times include the data transportation processes with the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
Modeling photon transport in transabdominal fetal oximetry
NASA Astrophysics Data System (ADS)
Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton
2000-07-01
The possibility of optical oximetry of the blood in the fetal brain, measured across the maternal abdomen just prior to birth, is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diameter fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm² fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 × 10⁻⁶ cm⁻² if the fetal brain has the same optical properties as the mother, and Tf = 1.0 × 10⁻⁶ cm⁻² for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf − Tm)/Tm is −53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
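The perturbation figure follows directly from the two transport values quoted; with the rounded values given in the abstract it evaluates to about −55% (the reported −53% presumably reflects unrounded transport values):

```python
# Perturbation due to the fetal brain, from the quoted transport values:
# Tm = transport with mother-only optical properties,
# Tf = transport with a perturbing fetal brain.
Tm = 2.2e-6  # fluence rate per W incident power, cm^-2
Tf = 1.0e-6  # cm^-2
P = (Tf - Tm) / Tm
print(f"P = {P:.0%}")  # -55% with these rounded inputs
```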
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)
Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations
O'Brien, M; Taylor, J; Procassini, R
2004-12-22
The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
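The per-cycle decision described above, checking whether dynamic load balancing would speed up the calculation, can be sketched as a simple imbalance test followed by an idealized redistribution. The threshold below is an illustrative knob, and this toy ignores the cost of the particle communications that a real rebalance incurs:

```python
def should_load_balance(particles_per_domain, threshold=1.25):
    """Rebalance when the most-loaded domain exceeds the mean
    particle load by a chosen factor (assumed heuristic)."""
    mean = sum(particles_per_domain) / len(particles_per_domain)
    return max(particles_per_domain) > threshold * mean

def balance(particles_per_domain):
    """Evenly reassign particle work across domains."""
    total = sum(particles_per_domain)
    n = len(particles_per_domain)
    base, extra = divmod(total, n)
    return [base + (1 if i < extra else 0) for i in range(n)]

loads = [1000, 200, 150, 650]   # particles per spatial domain this cycle
if should_load_balance(loads):
    loads = balance(loads)
print(loads)  # [500, 500, 500, 500]
```

In the actual scheme, balancing means reassigning processors to heavily worked spatial domains and communicating only a small number of particles, rather than freely moving work as this sketch does.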
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water-equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general, and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of
The macro response Monte Carlo method for electron transport
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested. Most
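The global stepping stage described above, looking up the nearest precomputed kugel data set and sampling exit state from its PDFs, can be sketched as follows. The "library" here is a toy stand-in for the single-scatter-generated PDFs; energies, losses, and step lengths are all illustrative assumptions.

```python
import random

def mrmc_step(energy, position, kugel_library, rng):
    """One global MRMC step: pick the kugel data set nearest in incident
    energy, then sample an (energy loss, displacement) pair from its
    precomputed samples and advance the electron to the step's end."""
    e_key = min(kugel_library, key=lambda e: abs(e - energy))
    d_energy, displacement = rng.choice(kugel_library[e_key])
    return energy - d_energy, position + displacement

# toy "library": incident energy (MeV) -> (energy loss MeV, step cm) samples
library = {
    1.0: [(0.05, 0.09), (0.07, 0.08)],
    4.0: [(0.20, 0.095)],
}

rng = random.Random(3)
e, x = 1.0, 0.0            # start: 1 MeV electron at depth 0
while e > 0.5:             # transport until a cutoff energy (assumed)
    e, x = mrmc_step(e, x, library, rng)
```

A real library would store full PDFs over exit energy, position, and trajectory per material, energy, and kugel radius, with energy deposition spread across the zones each step traverses.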
Monte Carlo Neutrino Transport in Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Richers, Sherwood; Dolence, Joshua; Ott, Christian
2017-01-01
Neutrino interactions dominate the energetics of core-collapse supernovae (CCSNe) and determine the composition of the matter ejected from CCSNe and gamma-ray bursts (GRBs). Three dimensional (3D) CCSN and neutron star merger simulations are rapidly improving, but still suffer from approximate treatments of neutrino transport that cripple their reliability and realism. I use my relativistic time-independent Monte Carlo neutrino transport code SEDONU to evaluate the effectiveness of leakage, moment, and discrete ordinate schemes in the context of core-collapse supernovae. I also developed a relativistic extension to the Random Walk approximation that greatly accelerates convergence in diffusive regimes, making full-domain simulations possible. Blue Waters Graduate Fellowship.
Optimization of Monte Carlo transport simulations in stochastic media
Liang, C.; Ji, W.
2012-07-01
This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next-sphere boundary search in the radiation transport procedure. To investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
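A minimal sketch of the RSA idea with a uniform cell grid accelerating the overlap test (mono-sized spheres in a unit box; the names and parameters are illustrative, not the authors' implementation):

```python
import random, math

def rsa_pack(n, radius, box=1.0, max_tries=100000):
    """Random Sequential Addition: place up to n equal, non-overlapping
    spheres in a cubic box. A uniform cell grid (cell edge >= one sphere
    diameter) restricts each overlap test to the 27 neighbouring cells."""
    ncell = max(1, int(box / (2.0 * radius)))
    grid = {}                                 # (i, j, k) -> list of centres
    centers, tries = [], 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        c = tuple(random.uniform(radius, box - radius) for _ in range(3))
        key = tuple(int(x / box * ncell) for x in c)
        ok = True
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    for other in grid.get((key[0] + di, key[1] + dj, key[2] + dk), []):
                        if math.dist(c, other) < 2.0 * radius:
                            ok = False
        if ok:
            centers.append(c)
            grid.setdefault(key, []).append(c)
    return centers
```

Because the cell edge is at least one sphere diameter, any overlapping pair must sit in the same or adjacent cells, so each trial costs O(1) instead of O(N); the same grid idea underlies fast nearest-neighbor searches during particle tracking.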
Yuan, Luqi; Xu, Shanshan; Fan, Shanhui
2015-11-15
We show that nonreciprocal unidirectional single-photon quantum transport can be achieved with the photonic Aharonov-Bohm effect. The system consists of a 1D waveguide coupling to two three-level atoms of the V-type. The two atoms, in addition, are each driven by an external coherent field. We show that the phase of the external coherent field provides a gauge potential for the photon states. With a proper choice of the phase difference between the two coherent fields, the transport of a single photon can exhibit unity contrast in its transmissions for the two propagation directions.
Improved algorithms and coupled neutron-photon transport for auto-importance sampling method
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Jun-Li; Wu, Zhen; Qiu, Rui; Li, Chun-Yan; Liang, Man-Chun; Zhang, Hui; Gang, Zhi; Xu, Hong
2017-01-01
The Auto-Importance Sampling (AIS) method is a Monte Carlo variance reduction technique proposed for deep penetration problems, which can significantly improve computational efficiency without pre-calculation of an importance distribution. However, the AIS method has so far been validated only on simple examples and cannot handle coupled neutron-photon transport. This paper presents improved algorithms for the AIS method, covering particle transport, fictitious particle creation and adjustment, fictitious surface geometry, random number allocation and calculation of the estimated relative error. These improvements allow the AIS method to be applied to complicated deep penetration problems with complex geometry and multiple materials. A completely coupled Neutron-Photon Auto-Importance Sampling (CNP-AIS) method is proposed to solve deep penetration problems of coupled neutron-photon transport using the improved algorithms. The NUREG/CR-6115 PWR benchmark was calculated using CNP-AIS, geometry splitting with Russian roulette and analog Monte Carlo, respectively. The calculation results of CNP-AIS are in good agreement with those of geometry splitting with Russian roulette and with the benchmark solutions. The computational efficiency of CNP-AIS for both neutrons and photons is much better than that of geometry splitting with Russian roulette in most cases, and is several orders of magnitude higher than that of analog Monte Carlo. Supported by the National Science and Technology Major Project of China (2013ZX06002001-007, 2011ZX06004-007) and the National Natural Science Foundation of China (11275110, 11375103)
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
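One of the named techniques, replacing linear search algorithms with binary versions, is easy to illustrate for an energy-grid lookup of the kind cross-section tables require. The grid values below are invented; the point is that both routines return identical interval indices while the binary version is O(log n).

```python
import bisect

# Illustrative energy grid (MeV) of the kind a cross-section table is indexed by.
energy_grid = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0]

def interval_linear(e):
    """O(n) scan, as in an unoptimized code path."""
    i = 0
    while i < len(energy_grid) - 2 and energy_grid[i + 1] <= e:
        i += 1
    return i

def interval_binary(e):
    """O(log n) replacement with identical results."""
    return min(max(bisect.bisect_right(energy_grid, e) - 1, 0),
               len(energy_grid) - 2)
```

Since every collision requires such a lookup, swapping the search pays off in exactly the hot subroutines that profiling identifies.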
Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation
Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun
2015-01-01
The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695
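For context, the core step of a tissue-optics Monte Carlo simulation of this kind combines an exponentially sampled path length, partial absorption of the photon packet's weight, and Henyey-Greenstein scattering. The sketch below uses placeholder optical coefficients, not measured peach-tissue values.

```python
import math, random

def photon_step(weight, mu_a, mu_s, g):
    """One absorption/scattering step of a photon packet (MCML-style).
    Returns the surviving weight, the free path length, and the cosine
    of the sampled deflection angle."""
    mu_t = mu_a + mu_s
    s = -math.log(random.random() or 1e-12) / mu_t   # free path length
    weight *= mu_s / mu_t                            # deposit mu_a/mu_t fraction
    xi = random.random()
    if g == 0:                                       # isotropic limit
        cos_t = 2.0 * xi - 1.0
    else:                                            # Henyey-Greenstein sampling
        tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
    return weight, s, cos_t
```

Modeling skin, flesh and core separately amounts to switching (mu_a, mu_s, g) whenever the sampled step crosses a layer boundary.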
bhlight: GENERAL RELATIVISTIC RADIATION MAGNETOHYDRODYNAMICS WITH MONTE CARLO TRANSPORT
Ryan, B. R.; Gammie, C. F.; Dolence, J. C.
2015-07-01
We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.
Scoring methods for implicit Monte Carlo radiation transport
Edwards, A.L.
1981-01-01
Analytical and numerical tests were made of a number of possible methods for scoring the energy exchange between radiation and matter in the implicit Monte Carlo (IMC) radiation transport scheme of Fleck and Cummings. The interactions considered were effective absorption, elastic scattering, and Compton scattering. The scoring methods tested were limited to simple combinations of analogue, linear expected value, and exponential expected value scoring. Only two scoring methods were found that produced the same results as a pure analogue method. These are a combination of exponential expected value absorption and deposition and analogue Compton scattering of the particle, with either linear expected value Compton deposition or analogue Compton deposition. In both methods, the collision distance is based on the total scattering cross section.
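The distinction between analogue and exponential expected-value absorption can be shown on a single path segment. This is a generic implicit-capture sketch under stated assumptions (one segment, constant cross section), not the exact Fleck-Cummings IMC implementation.

```python
import math, random

def analogue_absorption(sigma_a, length):
    """Analogue scoring: the particle is absorbed (and its whole weight
    deposited) with probability 1 - exp(-sigma_a * length)."""
    return random.random() < 1.0 - math.exp(-sigma_a * length)

def expected_value_deposit(weight, sigma_a, length):
    """Exponential expected-value scoring: deposit the expected absorbed
    weight along the segment and let the attenuated particle continue."""
    survive = math.exp(-sigma_a * length)
    deposited = weight * (1.0 - survive)
    return deposited, weight * survive
```

The expected-value estimator scores the same mean energy exchange as the analogue game but with zero variance on this segment, which is why combinations of the two must be checked for consistency as the abstract describes.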
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. {copyright} {ital 1996 American Institute of Physics.}
bhlight: General Relativistic Radiation Magnetohydrodynamics with Monte Carlo Transport
Ryan, Benjamin R; Dolence, Joshua C.; Gammie, Charles F.
2015-06-25
We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.
Electron transport in magnetrons by a posteriori Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, during the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
A deterministic computational model for the two dimensional electron and photon transport
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Nealy, John E.
2014-12-01
A deterministic (non-statistical) two dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics-based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes utilize established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specification is generalized to any material with the pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ where -π/2<θ<π/2, with corresponding symmetry assumed for the range of azimuth angles (0<φ<2π). In the transport formalism, a combined mean-free-path and average-trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a shorter response time.
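The parallel pattern described, independent pseudo-random streams whose partial tallies are combined, can be mimicked in a few lines. The streams below run serially purely for illustration, whereas the paper distributes them over MPI processes; the per-history score is a toy stand-in for an actual particle history.

```python
import random

def simulate_stream(seed, n_histories):
    """One worker's share of histories, with its own seeded RNG stream."""
    rng = random.Random(seed)                # independent stream per worker
    tally = 0.0
    for _ in range(n_histories):
        # toy per-history score; a real code would transport a particle here
        tally += 1.0 if rng.random() < 0.5 else 0.0
    return tally

def combined_estimate(total_histories, workers=4, base_seed=1234):
    """Combine partial tallies exactly as a message-passing code would."""
    per_worker = total_histories // workers
    partials = [simulate_stream(base_seed + w, per_worker)
                for w in range(workers)]
    return sum(partials) / (per_worker * workers)
```

Giving each worker its own seeded generator (rather than sharing one) is what libraries like SPRNG and DCMT provide with stronger independence guarantees; it also keeps results reproducible for a fixed seed layout regardless of scheduling.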
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.
Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations
NASA Astrophysics Data System (ADS)
Malladi, Mayank
Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. In this work, under the relaxation time approximation, thermal transport in the nanocomposites are
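Under the relaxation-time approximation mentioned above, a single phonon's free flights can be sampled as follows (1-D, constant relaxation time; a toy sketch, not the thesis code):

```python
import math, random

def free_flight_time(tau):
    """Sample a free-flight time from an exponential with mean tau."""
    return -tau * math.log(random.random() or 1e-12)

def move_phonon(pos, velocity, tau, t_max):
    """Advance one phonon through alternating free flights and isotropic
    direction resampling (1-D relaxation-time picture). Frequent scattering
    (tau << t_max) recovers diffusive transport; rare scattering is ballistic."""
    t = 0.0
    while t < t_max:
        dt = min(free_flight_time(tau), t_max - t)
        pos += velocity * dt                               # drift
        velocity = abs(velocity) * random.choice((-1.0, 1.0))  # scatter
        t += dt
    return pos
```

A non-gray simulation makes tau frequency-dependent and samples the phonon's frequency from the dispersion-weighted density of states, which is the case where the MC approach outperforms deterministic solvers.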
Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles
NASA Astrophysics Data System (ADS)
Pena, J.; Franco, L.; Gómez, F.; Iglesias, A.; Lobato, R.; Mosquera, J.; Pazos, A.; Pardo, J.; Pombar, M.; Rodríguez, A.; Sendón, J.
2004-11-01
A method for commissioning an EGSnrc Monte Carlo simulation of medical linac photon beams through wide-field lateral profiles at moderate depth in a water phantom is presented. Although depth-dose profiles are commonly used for nominal energy determination, our study shows that they are quite insensitive to energy changes below 0.3 MeV (0.6 MeV) for a 6 MV (15 MV) photon beam. Also, the depth-dose profile dependence on beam radius adds an additional uncertainty in their use for tuning nominal energy. Simulated 40 cm × 40 cm lateral profiles at 5 cm depth in a water phantom show greater sensitivity to both nominal energy and radius. Beam parameters could be determined by comparing only these curves with measured data.
Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.
Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats
2015-09-01
For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes coupled ionizing particle and optical photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter.
Berg, Eric; Roncali, Emilie; Cherry, Simon R
2015-06-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution.
Landauer formulation of photon transport in driven systems
NASA Astrophysics Data System (ADS)
Wang, Chiao-Hsuan; Taylor, Jacob M.
2016-10-01
Understanding the behavior of light in nonequilibrium scenarios underpins much of quantum optics and optical physics. While lasers provide a severe example of a nonequilibrium problem, recent interest in the near-equilibrium physics of so-called photon gases, such as in Bose condensation of light or in attempts to make photonic quantum simulators, suggests one re-examine some near-equilibrium cases. Here we consider how a sinusoidal parametric coupling between two semi-infinite photonic transmission lines leads to the creation and flow of photons between the two lines. Our approach provides a photonic analog to the Landauer transport formula, and using nonequilibrium Green's functions, we can extend it to the case of an interacting region between two photonic leads, where the sinusoid frequency plays the role of a voltage bias. Crucially, we identify both the mathematical framework and the physical regime in which photonic transport is directly analogous to electronic transport, and regimes in which other behavior such as two-mode squeezing can emerge.
Parallel processing implementation for the coupled transport of photons and electrons using OpenMP
NASA Astrophysics Data System (ADS)
Doerner, Edgardo
2016-05-01
In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the development tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out on a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
A Fano cavity test for Monte Carlo proton transport algorithms
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-15
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross sections are uniform. Such tests have not yet been performed for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy E₀ and a mass macroscopic cross section Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE₀/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs) expressed in absorbed dose per air kerma free-in-air for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of FLUKA code in computation of organ DCCs for photons using ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV in comparison with the results published in ICRP 74. The DCCs of female voxel phantom were found to be higher in comparison with male phantom for almost all organs in both the geometries.
Neutron contamination of Varian Clinac iX 10 MV photon beam using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Yani, S.; Tursinah, R.; Rhani, M. F.; Soh, R. C. X.; Haryanto, F.; Arif, I.
2016-03-01
High energy medical accelerators are commonly used in radiotherapy to increase the effectiveness of treatments. Neutrons can be emitted from a medical accelerator when X-rays strike any of its materials, an issue that has drawn the attention of many researchers. Neutron contamination causes problems such as degraded image resolution and radiation protection concerns for patients and radiation oncologists. This study concerns the simulation of the neutron contamination emitted from a Varian Clinac iX 10 MV using a Monte Carlo code system. As the neutron production process is very complex, a Monte Carlo simulation with the MCNPX code system was carried out to study this contamination. The design of this medical accelerator was modelled based on the actual materials and geometry. The maximum energies of photons and neutrons in the scoring plane were 10.5 and 2.239 MeV, respectively. The number and energy of the particles produced depend on the depth and the distance from the beam axis. These results indicate that the neutron contamination produced by the linac 10 MV photon beam in a typical treatment is not negligible.
Monte Carlo-based energy response studies of diode dosimeters in radiotherapy photon beams.
Arun, C; Palani Selvam, T; Dinkar, Verma; Munshi, Prabhat; Kalra, Manjit Singh
2013-01-01
This study presents Monte Carlo-calculated absolute and normalized (relative to a (60)Co beam) sensitivity values for a variety of commercially available silicon diode dosimeters in radiotherapy photon beams in the energy range (60)Co-24 MV. These values were obtained at 5 cm depth along the central axis of a water-equivalent phantom for a 10 cm × 10 cm field. The Monte Carlo calculations were based on the EGSnrc code system. The diode dosimeters considered in the calculations have different buildup materials such as aluminum, brass, copper, and stainless steel + epoxy. The calculated normalized sensitivity values of the diode dosimeters were then compared to previously published measured values for photon beams from (60)Co to 20 MV. The comparison showed reasonable agreement for some diode dosimeters and deviations of 5-17% for others (up to 17% for the 3.4 mm brass buildup case in a 10 MV beam). The larger deviations from measurement suggest that those diode dosimeter models were too simple. The effect of wall materials on the absorbed dose to the diode was studied and the results are presented. Spencer-Attix and Bragg-Gray stopping-power ratios (SPRs) of water to diode were calculated at 5 cm depth in water. The Bragg-Gray SPRs of water to diode compare well with the Spencer-Attix SPRs for ∆ = 100 keV and above at all beam qualities.
NASA Astrophysics Data System (ADS)
Yang, Bo; Qiu, Rui; Li, JunLi; Lu, Wei; Wu, Zhen; Li, Chunyan
2017-02-01
When a strong laser beam irradiates a solid target, a hot plasma is produced and high-energy electrons (so-called "hot electrons") are usually generated. These energetic electrons subsequently generate hard X-rays in the solid target through the bremsstrahlung process. To date, only limited studies have been conducted on this laser-induced radiological protection issue. In this study, extensive literature reviews on the physics and properties of hot electrons were conducted. On the basis of this information, the photon dose generated by the interaction between hot electrons and a solid target was simulated with the Monte Carlo code FLUKA. With some reasonable assumptions, the calculated dose can be regarded as an upper bound on the experimental results over laser intensities ranging from 10^19 to 10^21 W/cm^2. Furthermore, an equation to estimate the photon dose generated from ultraintense laser-solid interactions based on the normalized laser intensity is derived. The shielding effects of common materials, including concrete and lead, were also studied for the laser-driven X-ray source. The dose transmission curves and tenth-value layers (TVLs) in concrete and lead were calculated through Monte Carlo simulations. These results can be used to perform a preliminary, fast radiation safety assessment for the X-rays generated from ultraintense laser-solid interactions.
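The tenth-value-layer bookkeeping behind such dose transmission curves can be sketched in a few lines. The TVL value below is an illustrative placeholder, not one of the values computed in the study for concrete or lead:

```python
import math

def transmission(thickness_cm, tvl_cm):
    """Broad-beam shielding attenuation expressed via tenth-value layers:
    each TVL of material reduces the transmitted dose by a factor of 10."""
    return 10.0 ** (-thickness_cm / tvl_cm)

def required_thickness(dose_unshielded, dose_limit, tvl_cm):
    """Shield thickness needed to bring the dose below a limit:
    n TVLs, with n = log10(dose_unshielded / dose_limit)."""
    return tvl_cm * math.log10(dose_unshielded / dose_limit)

# Attenuating by a factor of 1000 requires exactly 3 TVLs.
tvl = 30.0  # cm; illustrative placeholder, not a computed TVL
assert abs(required_thickness(1.0, 1e-3, tvl) - 3 * tvl) < 1e-9
assert abs(transmission(3 * tvl, tvl) - 1e-3) < 1e-12
```

In practice the TVL itself depends on the photon spectrum and geometry, which is what the Monte Carlo transmission curves in the study provide.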
Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the multigrid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the Siemens Primus linear accelerator (linac). The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm) in different dosimetric test cases. The results show good agreement between Monte Carlo-simulated and MGS-calculated doses in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS and EGSnrc doses agree within 3% or 3 mm.
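The γ-index comparison used in such verifications can be sketched in 1D. This is a minimal, globally normalized version of the standard formalism, not the actual XiO/EGSnrc comparison code:

```python
import math

def gamma_index(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """1D global gamma index: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric over all
    evaluated points. Both doses lie on a uniform grid with the given
    spacing; a point passes the criterion when gamma <= 1."""
    dd = dd_pct / 100.0 * max(dose_ref)   # global dose criterion
    gammas = []
    for i, dr in enumerate(dose_ref):
        best = float("inf")
        for j, de in enumerate(dose_eval):
            dist_mm = (i - j) * spacing_mm
            best = min(best, (dist_mm / dta_mm) ** 2 + ((de - dr) / dd) ** 2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [0.2, 0.5, 1.0, 0.5, 0.2]
# Identical distributions pass everywhere with gamma == 0.
assert max(gamma_index(ref, ref, spacing_mm=1.0)) == 0.0
# A uniform 3% scaling sits exactly at the 3%/3 mm acceptance boundary.
assert max(gamma_index(ref, [1.03 * d for d in ref], 1.0)) <= 1.0 + 1e-9
```

Reported pass rates are then simply the fraction of points with γ ≤ 1; production implementations interpolate the evaluated distribution rather than testing grid points only.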
Peterson, J. R.; Peng, E.; Ahmad, Z.; Bankert, J.; Grace, E.; Hannel, M.; Hodge, M.; Lorenz, S.; Lupu, A.; Meert, A.; Nagarajan, S.; Todd, N.; Winans, A.; Young, M.; Jernigan, J. G.; Kahn, S. M.; Rasmussen, A. P.; Chang, C.; Gilmore, D. K.; Claver, C.
2015-05-15
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s^-1, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
Robust light transport in non-Hermitian photonic lattices
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
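The imaginary-gauge-field mechanism described above can be illustrated with a minimal Hatano-Nelson-type tight-binding sketch. The parameter values and the crude discrete-time propagation are illustrative only, not the paper's CROW model:

```python
import math

def hatano_nelson(n, t=1.0, g=0.5):
    """1D tight-binding chain with an imaginary gauge field g:
    forward hopping t*e^{+g}, backward hopping t*e^{-g}. The matrix
    is non-Hermitian because H[i][j] != H[j][i]."""
    H = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        H[i + 1][i] = t * math.exp(+g)   # amplified left-to-right hop
        H[i][i + 1] = t * math.exp(-g)   # damped right-to-left hop
    return H

def apply_steps(H, psi, steps):
    """Repeatedly apply H to an amplitude vector -- a crude discrete
    propagation, but enough to expose the transport asymmetry."""
    n = len(psi)
    for _ in range(steps):
        psi = [sum(H[i][j] * psi[j] for j in range(n)) for i in range(n)]
    return psi

n = 7
H = hatano_nelson(n)
rightward = apply_steps(H, [1.0] + [0.0] * (n - 1), n - 1)[-1]  # site 0 -> n-1
leftward = apply_steps(H, [0.0] * (n - 1) + [1.0], n - 1)[0]    # site n-1 -> 0
# Forward transfer is amplified (e^{3} here), backward transfer damped (e^{-3}).
assert rightward > 1.0 > leftward
```

The asymmetry is a property of the hopping matrix itself, so it survives on-site disorder, which is the robustness the abstract refers to.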
Monte Carlo study of Quark Gluon Plasma using photon jet observables
NASA Astrophysics Data System (ADS)
Xing, Tian
2016-09-01
Relativistic heavy ion collisions create an exotic state of deconfined nuclear matter called the quark gluon plasma (QGP), providing an opportunity to study the strong interaction. In some particularly hard-scattered events, a parton with high transverse momentum (pT) interacts with this medium before fragmenting into a spray of particles called a jet. Jet properties in heavy ion collisions can be modified relative to expectations from pp collisions; this effect is called jet quenching. Measurement of the internal structure of jets can provide information about this effect and about the medium itself. On the other hand, studying events in which a jet recoils against a photon from the initial hard scattering offers a way to calibrate the momentum of the modified jet: since photons do not carry color charge, they escape the QGP with their initial momentum intact. In this poster, results from the Monte Carlo event generators Pythia and JEWEL are presented for fragmentation functions and jet suppression in photon-jet events, alongside experimental data from CMS and ATLAS at a center-of-mass energy of 2.76 TeV. Predictions are also presented for lead-lead collisions at a center-of-mass energy of 5.02 TeV.
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank
2011-03-15
Purpose: The goal of this work is to compare D_w,m (radiation transported in medium; dose scored in water) and D_m,m (radiation transported in medium; dose scored in medium) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: (125)I, (103)Pd, and (131)Cs seeds, as well as an EBS operating at 50 kV. Ratios of D_w,m over D_m,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using (103)Pd) and prostate (using (125)I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for D_w,m and D_m,m. Results: (1) Differences (D_w,m/D_m,m - 1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D_w,m/D_m,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%. (2) The ratio of D90(w
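The large cavity theory conversion invoked above amounts to a spectrum-weighted ratio of mass energy-absorption coefficients. A hedged sketch follows; the two-bin spectrum and coefficient values are illustrative placeholders, not data from the study:

```python
def dwm_over_dmm(fluence, energies_mev, mu_en_water, mu_en_medium):
    """Large-cavity-theory conversion factor for photons:
    D_w,m / D_m,m ~= sum_E phi(E)*E*(mu_en/rho)_water(E)
                   / sum_E phi(E)*E*(mu_en/rho)_medium(E)."""
    num = sum(p * e * w for p, e, w in zip(fluence, energies_mev, mu_en_water))
    den = sum(p * e * m for p, e, m in zip(fluence, energies_mev, mu_en_medium))
    return num / den

# Illustrative two-bin low-energy "seed" spectrum with placeholder
# coefficients (cm^2/g); real values would come from tabulated mass
# energy-absorption data for the actual source spectrum.
ratio = dwm_over_dmm(fluence=[0.7, 0.3],
                     energies_mev=[0.028, 0.035],
                     mu_en_water=[4.2, 2.2],
                     mu_en_medium=[3.5, 1.9])
assert 1.0 < ratio < 1.3   # water absorbs more per unit mass here, so ratio > 1
```

The abstract's observation that the ratio barely varies with distance corresponds to the spectrum weights changing only moderately with depth.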
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-09-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
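The hydrogen-plus-oxygen substitution scheme described above rests on the elemental mixture rule for mass attenuation. A minimal sketch with placeholder coefficients (illustrative values, not tabulated data):

```python
def mixture_mu_rho(mass_fractions, elemental_mu_rho):
    """Elemental mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i,
    where w_i are the elemental mass fractions of the tissue."""
    assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9
    return sum(w * elemental_mu_rho[el] for el, w in mass_fractions.items())

# Placeholder mu/rho values (cm^2/g) at a single megavoltage energy.
# In the Compton-dominated MV range mu/rho scales with Z/A, which is
# ~1 for hydrogen and ~0.5 for heavier elements; this near-factor-of-two
# difference is why the hydrogen fraction dominates the result.
mu_rho = {"H": 0.040, "O": 0.020}
tissue = {"H": 0.10, "O": 0.90}   # correct H content, remainder lumped as O
assert abs(mixture_mu_rho(tissue, mu_rho) - 0.022) < 1e-12
```

Under this scheme two tissues with equal hydrogen fractions get nearly equal MV attenuation and stopping behaviour, which is the paper's central claim.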
Currie, B E
2009-06-01
This paper presents the findings of an investigation into the Monte Carlo simulation of superficial cancer treatments of an internal canthus site using both kilovoltage photons and megavoltage electrons. The EGSnrc system of codes is utilised for the Monte Carlo simulation of the transport of electrons and photons through a phantom representative of either a water phantom or a treatment site in a patient. Two clinical treatment units are simulated: the Varian Medical Systems Clinac 2100C accelerator for 6 MeV electron fields and the Pantak Therapax SXT 150 X-ray unit for 100 kVp photon fields. Depth dose, profile and isodose curves for these simulated units are compared against those measured by ion chamber in a PTW Freiburg MP3 water phantom. Good agreement between simulated and measured data was achieved away from the surface of the phantom. Dose distributions are determined for both kV photon and MeV electron fields in the internal canthus site, which contains lead and tungsten shielding, rapidly sloping surfaces and interfaces between materials of different density. There is a relatively high deposition of dose at tissue-bone and tissue-cartilage interfaces in the kV photon fields, in contrast to the MeV electron fields. This is reflected in the maximum doses in the PTV of the internal canthus field: 12 Gy for kV photons and 4.8 Gy for MeV electrons. From the dose distributions, DVHs and dose comparators are used to assess the simulated treatment fields. Any indication as to which modality is preferable for treating the internal canthus requires careful consideration of many different factors; this investigation provides further perspective for assessing which modality is appropriate.
Bishop, Martin J; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon "packets" as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct "humped" morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with "virtual-electrode" regions of strongly de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach, along with highly anatomically-detailed models, to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity.
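The photon-packet scheme referred to above can be sketched for a homogeneous slab. This is a simplified illustration with an approximate direction update (polar angle only, no azimuthal rotation) and arbitrary optical parameters, not the authors' tetrahedral-mesh implementation:

```python
import math, random

def track_packet(mu_a, mu_s, g, thickness, rng=random.random):
    """Track one photon packet through a homogeneous slab: exponential
    free-path sampling, partial-weight absorption by the albedo,
    Henyey-Greenstein polar-angle scattering, Russian-roulette
    termination. Returns (escaped_weight, absorbed_weight)."""
    mu_t = mu_a + mu_s
    z, uz = 0.0, 1.0                  # depth (cm) and direction cosine
    weight, absorbed = 1.0, 0.0
    while True:
        z += uz * (-math.log(1.0 - rng()) / mu_t)  # sample free path
        if z < 0.0 or z > thickness:               # packet leaves the slab
            return weight, absorbed
        absorbed += weight * mu_a / mu_t           # deposit a weight fraction
        weight *= mu_s / mu_t
        if weight < 1e-4:                          # Russian roulette (unbiased)
            if rng() < 0.1:
                weight *= 10.0
            else:
                return 0.0, absorbed
        if g == 0.0:                               # Henyey-Greenstein sampling
            uz = 2.0 * rng() - 1.0
        else:
            f = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng())
            uz = (1.0 + g * g - f * f) / (2.0 * g)

random.seed(1)
runs = [track_packet(mu_a=0.1, mu_s=10.0, g=0.9, thickness=0.5)
        for _ in range(2000)]
balance = sum(w + a for w, a in runs) / len(runs)
assert abs(balance - 1.0) < 0.01   # packet weight is conserved on average
```

A full code additionally rotates the direction vector by a sampled azimuth, handles refractive-index boundaries, and, as in the abstract, tests each step against the boundaries of a finite element mesh.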
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a user's manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
Seif, F.; Bayatiani, M. R.
2015-01-01
Background Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination, which can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Method The 6 MV photon beam of a Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's specifications. The validation was done by comparing the calculated depth doses and profiles with dosimetry measurements in a water phantom (error less than 2%). Percentage depth doses (PDDs), profiles and the contamination-electron energy spectrum were calculated for different therapeutic field sizes (5×5 to 40×40 cm2). Results The electron contamination dose was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for the 5×5 cm2 field to 27% for the 40×40 cm2 field. Conclusion Based on the results, the effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in treatment planning systems. PMID:25973409
A patient-specific Monte Carlo dose-calculation method for photon beams.
Wang, L; Chui, C S; Lovelock, M
1998-06-01
A patient-specific, CT-based, Monte Carlo dose-calculation method for photon beams has been developed to correctly account for inhomogeneity in the patient. The method employs the EGS4 system to sample the interaction of radiation in the medium. CT images are used to describe the patient geometry and to determine the density and atomic number in each voxel. The user code (MCPAT) provides the data describing the incident beams, and performs geometry checking and energy scoring in patient CT images. Several variance reduction techniques have been implemented to improve the computation efficiency. The method was verified with measured data and other calculations, both in homogeneous and inhomogeneous media. The method was also applied to a lung treatment, where significant differences in dose distributions, especially in the low-density region, were observed when compared with the results using an equivalent pathlength method. Comparison of the DVHs showed that the Monte Carlo calculated plan predicted an underdose of nearly 20% to the target, while the maximum doses to the cord and the heart were increased by 25% and 33%, respectively. These results suggested that the Monte Carlo method may have an impact on treatment designs, and also that it can be used as a benchmark to assess the accuracy of other dose calculation algorithms. The computation time for the lung case employing five 15-MV wedged beams, with an approximate field size of 13 × 13 cm and the dose grid size of 0.375 cm, was less than 14 h on a 175-MHz computer with a standard deviation of 1.5% in the high-dose region.
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Sinha, A.; Patni, H.K.; Dixit, B.M.; Painuly, N.K.; Singh, N.
2016-01-01
Background: Most preclinical studies are carried out on mice. For internal dose assessment of a mouse, specific absorbed fraction (SAF) values play an important role. In most studies, SAF values are estimated using older standard human organ compositions and values for a limited set of source-target pairs. Objective: SAF values for monoenergetic photons of energies 15, 50, 100, 500, 1000 and 4000 keV were evaluated for the Digimouse voxel phantom incorporated in the Monte Carlo code FLUKA. The source organs considered in this study were the lungs, skeleton, heart, bladder, testes, stomach, spleen, pancreas, liver, kidneys, adrenals, eyes and brain. The target organs considered were the lungs, skeleton, heart, bladder, testes, stomach, spleen, pancreas, liver, kidneys, adrenals and brain. The eye was considered as a target organ only for the eye as a source organ. Organ compositions and densities were adopted from International Commission on Radiological Protection (ICRP) Publication 110. Results: Evaluated organ masses and SAF values are presented in tabular form. The SAF values decrease with increasing source-to-target distance, and the SAF value for self-irradiation decreases with increasing photon energy. The SAF values also depend on the target mass, with higher values obtained for lower masses. The effect of composition is largest when the lungs are the target organ, where both the mass and the estimated SAF values show larger differences. Conclusion: These SAF values are very important for absorbed dose calculations for the various organs of a mouse. PMID:28144589
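The quantity tabulated above has a one-line definition worth keeping in mind; a sketch with made-up numbers, not values from the study:

```python
def specific_absorbed_fraction(e_absorbed_mev, e_emitted_mev, target_mass_kg):
    """SAF(target <- source) in kg^-1: the absorbed fraction
    (energy absorbed in the target / energy emitted by the source)
    divided by the target mass. All else equal, lighter targets
    therefore yield larger SAF values, as the abstract notes."""
    return (e_absorbed_mev / e_emitted_mev) / target_mass_kg

# Hypothetical example: 1000 MeV emitted in the source organ,
# 50 MeV absorbed in a 2 g target -> AF = 0.05, SAF = 25 kg^-1.
assert abs(specific_absorbed_fraction(50.0, 1000.0, 0.002) - 25.0) < 1e-9
```

In a Monte Carlo run the two energies are simply tallies: energy deposition scored in the target voxels versus total energy launched from the source organ.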
Photon-Inhibited Topological Transport in Quantum Well Heterostructures.
Farrell, Aaron; Pereg-Barnea, T
2015-09-04
Here we provide a picture of transport in quantum well heterostructures with a periodic driving field in terms of a probabilistic occupation of the topologically protected edge states in the system. This is done by generalizing methods from the field of photon-assisted tunneling. We show that the time dependent field dresses the underlying Hamiltonian of the heterostructure and splits the system into sidebands. Each of these sidebands is occupied with a certain probability which depends on the drive frequency and strength. This leads to a reduction in the topological transport signatures of the system because of the probability to absorb or emit a photon. Therefore when the voltage is tuned to the bulk gap the conductance is smaller than the expected 2e(2)/h. We refer to this as photon-inhibited topological transport. Nevertheless, the edge modes reveal their topological origin in the robustness of the edge conductance to disorder and changes in model parameters. In this work the analogy with photon-assisted tunneling allows us to interpret the calculated conductivity and explain the sum rule observed by Kundu and Seradjeh.
SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
NASA Astrophysics Data System (ADS)
Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.
2014-03-01
Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. CT import and photon-electron transport are scheduled for a later release. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.
Czarnecki, D; Voigts-Rhetz, P von; Shishechian, D Uchimura; Zink, K
2015-06-15
Purpose: To develop a fast and accurate calculation model to reconstruct the applied photon fluence of an external photon radiation therapy treatment from an image recorded by an electronic portal imaging device (EPID). Methods: To reconstruct the initial photon fluence, the 2D EPID image was corrected for scatter from the patient/phantom and EPID to generate the transmitted primary photon fluence. This was done by an iterative deconvolution using precalculated point spread functions (PSFs). The transmitted primary photon fluence was then backprojected through the patient/phantom geometry, considering linear attenuation, to obtain the initial photon fluence applied for the treatment. The calculation model was verified using Monte Carlo simulations performed with the EGSnrc code system. EPID images were produced by calculating the dose deposition in the EPID from a 6 MV photon beam irradiating a water phantom with air and bone inhomogeneities and the ICRP anthropomorphic voxel phantom. Results: The initial photon fluence was reconstructed using a single PSF and using position-dependent PSFs which depend on the radiological thickness of the irradiated object. Applying position-dependent point spread functions, the mean uncertainty of the reconstructed initial photon fluence could be reduced from 1.13% to 0.13%. Conclusion: This study presents a calculation model for fluence reconstruction from EPID images. The results show a clear advantage when position-dependent PSFs are used for the iterative reconstruction. The basic framework of a reconstruction method was established, and further evaluations must be made in an experimental study.
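The iterative PSF-based scatter correction described above can be illustrated with a small fixed-point deconvolution sketch (1D for brevity). The function name, the update rule, the kernel, and the iteration count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def primary_fluence(epid_profile, scatter_kernel, n_iter=50):
    """Estimate the transmitted primary fluence from a 1D EPID profile
    by iteratively subtracting kernel-predicted scatter
    (fixed point: primary = measured - kernel * primary)."""
    primary = epid_profile.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, scatter_kernel, mode="same")
        primary = np.clip(epid_profile - scatter, 0.0, None)
    return primary
```

At convergence the estimate satisfies primary + kernel*primary = measured, i.e. the scatter predicted from the recovered primary fluence accounts for the remainder of the EPID signal.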
3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations
NASA Astrophysics Data System (ADS)
Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.
2010-11-01
We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al.1 to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties, until they lose all their energy, escape into the air, or return to the VOI, but the energy deposition outside of the VOI is not computed and recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
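The core bookkeeping shared by such multi-layer MC optical codes (exponential free paths, fractional weight deposition at each interaction) can be sketched as follows. Direction sampling, layers, and the boundary handling discussed above are deliberately omitted, so this is an illustrative fragment only:

```python
import math
import random

def absorb_walk(mu_a, mu_s, rng=random.random, w_min=1e-4):
    """Minimal MC photon-weight transport in an infinite homogeneous
    medium (mu_a > 0): sample an exponential free path, deposit the
    absorbed fraction mu_a/mu_t of the current weight at each
    interaction site, and scatter the rest."""
    mu_t = mu_a + mu_s
    weight, deposited, path = 1.0, 0.0, 0.0
    while weight > w_min:                      # real codes use Russian roulette here
        path += -math.log(1.0 - rng()) / mu_t  # exponential free path length
        absorbed = weight * mu_a / mu_t        # fraction absorbed at this site
        deposited += absorbed
        weight -= absorbed
    return deposited, weight, path
```

Energy is conserved by construction: the deposited dose plus the residual photon weight always sums to the launch weight of 1.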
Sakota, Daisuke; Takatani, Setsuo
2010-01-01
A photon-cell interactive Monte Carlo (pciMC) that tracks photon migration in both the extra- and intracellular spaces is developed without using the macroscopic scattering phase functions and anisotropy factors required by conventional Monte Carlo (MC) codes. The interaction of photons at the plasma-cell boundary of randomly oriented 3-D biconcave red blood cells (RBCs) is modeled using geometric optics. The pciMC incorporates different photon velocities from the extra- to the intracellular space, whereas the conventional MC treats RBCs as points in space with a constant velocity. In comparison to the experiments, the pciMC yielded mean errors in photon migration time of 9.8±6.8% and 11.2±8.5% for suspensions of small and large RBCs (RBC(small), RBC(large)), averaged over the optically diffusing region from 2000 to 4000 μm, while the conventional random walk Monte Carlo simulation gave statistically higher mean errors of 19.0±5.8% (p < 0.047) and 21.7±19.1% (p < 0.055), respectively. The gradients of optical density in the diffusing region yielded statistically insignificant differences between the pciMC and experiments, with the mean errors between them being 1.4 and 0.9% for RBC(small) and RBC(large), respectively. The pciMC based on geometric optics can be used to accurately predict photon migration in optically diffusing, turbid media.
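The geometric-optics treatment of the plasma-cell boundary rests on Snell refraction and Fresnel reflectance. A minimal sketch of the unpolarized Fresnel reflectance is below; the refractive indices in the usage are illustrative, not the paper's values:

```python
import math

def fresnel_unpolarized(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for a ray hitting an interface
    from medium n1 into medium n2 with incidence cosine cos_i --
    the geometric-optics building block for photon interaction at a
    plasma-cell boundary."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # Snell: sin^2 of transmitted angle
    if sin_t2 > 1.0:
        return 1.0                                  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)
```

At each boundary hit the photon is reflected with this probability and refracted otherwise, which replaces the macroscopic phase function of conventional MC.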
Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.
1993-04-01
Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
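The CADIS scheme named above has compact defining relations: the biased source pdf is q-hat = q·phi†/R and the birth weight is w = R/phi†, where phi† is the adjoint (importance) flux and R the total response. A minimal sketch of those relations (not ADVANTG's implementation):

```python
def cadis_source_biasing(source, adjoint_flux):
    """CADIS source biasing: sample source particles in proportion to
    source strength times adjoint (importance) flux, and assign birth
    weights that keep the estimator unbiased."""
    response = sum(q * phi for q, phi in zip(source, adjoint_flux))
    biased_pdf = [q * phi / response for q, phi in zip(source, adjoint_flux)]
    weights = [response / phi for phi in adjoint_flux]
    return biased_pdf, weights
```

Because biased_pdf[i] * weights[i] recovers the original source strength in every cell, the expectation of any tally is unchanged while sampling concentrates on important regions.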
Kernel density estimator methods for Monte Carlo radiation transport
NASA Astrophysics Data System (ADS)
Banerjee, Kaushik
In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation, histograms are used to represent global tallies, which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
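The contrast with histogram binning can be made concrete with a minimal Gaussian KDE tally; the function and its parameters are an illustrative sketch, not the dissertation's code:

```python
import numpy as np

def kde_tally(samples, weights, x, bandwidth):
    """Gaussian kernel density estimate of a weighted Monte Carlo
    tally, evaluated at arbitrary points x without any bin
    structure."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (kernels * weights[None, :]).sum(axis=1) / (bandwidth * weights.sum())
```

Given the collision sites and weights from a run, the tally can be evaluated afterwards at any point of interest; a histogram would instead commit to a bin structure before the run.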
Monte Carlo simulation of small electron fields collimated by the integrated photon MLC
NASA Astrophysics Data System (ADS)
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed, which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated the discrepancies, namely too-broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby agreement mostly within 3%/3 mm was found.
Monte Carlo modelling of positron transport in real world applications
NASA Astrophysics Data System (ADS)
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interactions with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
LDRD project 151362 : low energy electron-photon transport.
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
Kanick, S C; Robinson, D J; Sterenborg, H J C M; Amelink, A
2009-11-21
Single fiber reflectance spectroscopy is a method to noninvasively quantitate tissue absorption and scattering properties. This study utilizes a Monte Carlo (MC) model to investigate the effect that optical properties have on the propagation of photons that are collected during the single fiber reflectance measurement. MC model estimates of the single fiber photon path length (L(SF)) show excellent agreement with experimental measurements and predictions of a mathematical model over a wide range of optical properties and fiber diameters. Simulation results show that L(SF) is unaffected by changes in anisotropy (g ∈ {0.8, 0.9, 0.95}), but is sensitive to changes in phase function (Henyey-Greenstein versus modified Henyey-Greenstein). A 20% decrease in L(SF) was observed for the modified Henyey-Greenstein compared with the Henyey-Greenstein phase function; an effect that is independent of optical properties and fiber diameter and is approximated with a simple linear offset. The MC model also returns depth-resolved absorption profiles that are used to estimate the mean sampling depth (Z(SF)) of the single fiber reflectance measurement. Simulated data are used to define a novel mathematical expression for Z(SF) that is expressed in terms of optical properties, fiber diameter and L(SF). The model of sampling depth indicates that the single fiber reflectance measurement is dominated by shallow scattering events, even for large fibers; a result that suggests that the utility of single fiber reflectance measurements of tissue in vivo will be in the quantification of the optical properties of superficial tissues.
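The Henyey-Greenstein phase function referred to above has a closed-form inverse CDF for the scattering cosine, which is how MC optical codes typically sample it. A sketch of the standard sampling formula:

```python
import math
import random

def sample_hg_cosine(g, rng=random.random):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    with anisotropy g via its analytic inverse CDF."""
    xi = rng()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0            # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - t * t) / (2.0 * g)
```

A quick sanity check is that the sample mean of the cosines converges to g, the defining property of the anisotropy factor.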
Koger, B; Kirkby, C
2016-12-02
As a recent area of development in radiation therapy, gold nanoparticle (GNP) enhanced radiation therapy has shown potential to increase tumour dose while maintaining acceptable levels of healthy tissue toxicity. In this study, the effect of varying photon beam energy in GNP enhanced arc radiation therapy (GEART) is quantified through the introduction of a dose scoring metric, and GEART is compared to a conventional radiotherapy treatment. The PENELOPE Monte Carlo code was used to model several simple phantoms consisting of a spherical tumour containing GNPs (concentration: 15 mg Au g(-1) tumour, 0.8 mg Au g(-1) normal tissue) in a cylinder of tissue. Several monoenergetic photon beams, with energies ranging from 20 keV to 6 MeV, as well as 100, 200, and 300 kVp spectral beams, were used to irradiate the tumour in a 360° arc treatment. A dose metric was then used to compare tumour and tissue doses from GEART treatments to a similar treatment from a 6 MV spectrum. This was also performed on a simulated brain tumour using patient computed tomography data. GEART treatments showed potential over the 6 MV treatment for many of the simulated geometries, delivering up to 88% higher mean dose to the tumour for a constant tissue dose, with the effect greatest near a source energy of 50 keV. This effect is also seen with the inclusion of bone in a brain treatment, with a 14% increase in mean tumour dose over 6 MV, while still maintaining acceptable levels of dose to the bone and brain.
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Iwano, Takayuki; Umeyama, Shinji
2015-12-01
fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS is recently becoming common. Using LEDs, fNIRS equipment can be more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues between the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
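The wavelength-dependent path lengths translate into hemoglobin crosstalk through the two-wavelength modified Beer-Lambert inversion. A minimal sketch of that relation (all numerical values in the usage are made up, not the paper's):

```python
import numpy as np

def hb_crosstalk(eps, L_true, L_assumed):
    """Crosstalk matrix C for the two-wavelength modified Beer-Lambert
    inversion: recovered = C @ true for (HbO, HbR) concentration
    changes. Rows of eps are wavelengths, columns are chromophores;
    L_* are the partial path lengths per wavelength. C = I only when
    the assumed path lengths match the true ones."""
    A_true = eps * L_true[:, None]        # forward model generating the OD changes
    A_assumed = eps * L_assumed[:, None]  # model used in the reconstruction
    return np.linalg.solve(A_assumed, A_true)
```

Off-diagonal entries of C are exactly the crosstalk: a true change in one chromophore leaking into the recovered value of the other.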
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulations of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
NASA Astrophysics Data System (ADS)
Sarria, D.; Blelly, P.-L.; Forme, F.
2015-05-01
Terrestrial gamma ray flashes are natural bursts of X and gamma rays, correlated to thunderstorms, that are likely to be produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere and a small part of it may be detected by a satellite in low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then we validate it by comparison with a benchmark GEANT4 simulation with similar settings. Then, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that are reaching satellite altitude.
Filippone, W.L.; Baker, R.S.
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for on its own. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, diffusion synthetic acceleration remains effective in accelerating the S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors
NASA Astrophysics Data System (ADS)
Wu, Dufan; Xu, Xiaofei; Zhang, Li; Wang, Sen
2016-09-01
In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility and Monte Carlo simulations require too much knowledge of the detector. In this paper, we proposed a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Then numerical models were used to describe the processes of charge sharing, anti-charge sharing and spectral broadening, which were too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models, and they could be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements. The model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.
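The spectral-broadening stage layered on top of the GEANT4 deposition model can be sketched as a Gaussian convolution with a calibrated width. This is an illustrative sketch of that one stage (the charge-sharing stages are omitted, and the width here stands in for a calibrated free parameter):

```python
import numpy as np

def broaden_spectrum(energies, counts, sigma):
    """Convolve a simulated energy-deposition spectrum with a Gaussian
    of width sigma (same units as the energy axis) to mimic electronic
    spectral broadening. Assumes a uniform energy grid."""
    de = energies[1] - energies[0]
    k = np.arange(-4.0 * sigma, 4.0 * sigma + de, de)   # +/- 4 sigma kernel support
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()                              # preserve total counts
    return np.convolve(counts, kernel, mode="same")
```

Because the kernel is normalized, the total number of counts is preserved while sharp deposition features (e.g. fluorescence peaks) acquire the measured detector resolution.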
A Monte Carlo model for out-of-field dose calculation from high-energy photon therapy.
Kry, Stephen F; Titt, Uwe; Followill, David; Pönisch, Falk; Vassiliev, Oleg N; White, R Allen; Stovall, Marilyn; Salehpour, Mohammad
2007-09-01
As cancer therapy becomes more efficacious and patients survive longer, the potential for late effects increases, including effects induced by radiation dose delivered away from the treatment site. This out-of-field radiation is of particular concern with high-energy radiotherapy, as neutrons are produced in the accelerator head. We recently developed an accurate Monte Carlo model of a Varian 2100 accelerator using MCNPX for calculating the dose away from the treatment field resulting from low-energy therapy. In this study, we expanded and validated our Monte Carlo model for high-energy (18 MV) photon therapy, including both photons and neutrons. Simulated out-of-field photon doses were compared with measurements made with thermoluminescent dosimeters in an acrylic phantom up to 55 cm from the central axis. Simulated neutron fluences and energy spectra were compared with measurements made using gold foil activation in moderators and with data from the literature. The average local difference between the calculated and measured photon dose was 17%, including doses as low as 0.01% of the central axis dose. The out-of-field photon dose varied substantially with field size and distance from the edge of the field but varied little with depth in the phantom, except at depths shallower than 3 cm, where the dose sharply increased. On average, the difference between the simulated and measured neutron fluences was 19% and good agreement was observed with the neutron spectra. The neutron dose equivalent varied little with field size or distance from the central axis but decreased with depth in the phantom. Neutrons were the dominant component of the out-of-field dose equivalent for shallow depths and large distances from the edge of the treatment field. This Monte Carlo model is useful to both physicists and clinicians when evaluating out-of-field doses and associated potential risks.
Quantum Zeno switch for single-photon coherent transport
Zhou Lan; Yang, S.; Liu Yuxi; Sun, C. P.; Nori, Franco
2009-12-15
Using a dynamical quantum Zeno effect, we propose a general approach to control the coupling between a two-level system (TLS) and its surroundings, by modulating the energy-level spacing of the TLS with a high-frequency signal. We show that the TLS-surroundings interaction can be turned off when the ratio between the amplitude and the frequency of the modulating field is adjusted to be a zero of a Bessel function. The quantum Zeno effect of the TLS can also be observed by the vanishing of the photon reflection at these zeros. Based on these results, we propose a quantum switch to control the transport of a single photon in a one-dimensional waveguide. Our analytical results agree well with numerical results using Floquet theory.
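The switching condition above can be illustrated numerically. In sideband-engineering schemes of this kind, the effective TLS-surroundings coupling is typically renormalized by the Bessel function J0 of the drive's amplitude-to-frequency ratio (that it is specifically J0 here is an assumption); a minimal sketch locating the first switch-off point:

```python
import math

def bessel_j0(x, terms=40):
    """Power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -((x / 2.0) ** 2) / ((k + 1) ** 2)
    return total

def first_j0_zero(lo=2.0, hi=3.0, tol=1e-12):
    """Bisect for the first positive zero of J0 (approx. 2.404826):
    the first amplitude/frequency ratio at which the coupling turns off."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Tuning the modulating field so that amplitude/frequency equals this value would, under the J0 assumption, suppress photon transmission through the waveguide-coupled TLS.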
Control of photon transport properties in nanocomposite nanowires
NASA Astrophysics Data System (ADS)
Moffa, M.; Fasano, V.; Camposeo, A.; Persano, L.; Pisignano, D.
2016-02-01
Active nanowires and nanofibers can be realized by the electric-field-induced stretching of polymer solutions with sufficient molecular entanglements. The resulting nanomaterials are attracting increasing attention in view of their application in a wide variety of fields, including optoelectronics, photonics, energy harvesting, nanoelectronics, and microelectromechanical systems. Realizing nanocomposite nanofibers is especially interesting in this respect. In particular, methods suitable for embedding inorganic nanocrystals in electrified jets and then in active fiber systems allow for controlling light-scattering and refractive index properties in the realized fibrous materials. We here report on the design, realization, and morphological and spectroscopic characterization of new species of active, composite nanowires and nanofibers for nanophotonics. We focus on the properties of light confinement and photon transport along the nanowire longitudinal axis, and on how these depend on nanoparticle incorporation. Optical loss mechanisms and their influence on device design and performance are also presented and discussed.
NASA Astrophysics Data System (ADS)
Schneider, A. M.; Flanner, M.; Yang, P.; Yi, B.; Huang, X.; Feldman, D.
2015-12-01
The spectral albedo of a snow-covered surface is sensitive to effective snow grain size. Snow metamorphism, then, affects the strength of surface albedo feedback and changes the radiative energy budget of the planet. The Near-Infrared Emitting Reflectance Dome (NERD) is an instrument in development designed to measure snow effective radius from in situ bidirectional reflectance factors (BRFs) by illuminating a surface with nadir-positioned light-emitting diodes centered around 1.30 and 1.55 microns. A better understanding of the dependence of BRFs on snow grain shape and size is imperative for constraining measurements taken by the NERD. Here, we use the Monte Carlo method for photon transport to explore BRFs of snow surfaces of different shapes and sizes. In addition to assuming spherical grains and using Mie theory, we incorporate into the model the scattering phase functions and other single scattering properties of the following nine aspherical grain shapes: hexagonal columns, plates, hollow columns, droxtals, hollow bullet rosettes, solid bullet rosettes, 8-element column aggregates, 5-element plate aggregates, and 10-element plate aggregates. We present the simulated BRFs of homogeneous snow surfaces for these ten shape habits and show their spectral variability for a wide range of effective radii. Initial findings using Mie theory indicate that surfaces of spherical particles exhibit rather Lambertian reflectance for the two incident wavelengths used in the NERD and show a monotonically decreasing trend in black-sky albedo with increasing effective radius. These results are consistent with previous studies and also demonstrate good agreement with models using the two-stream approximation.
Evaluation of a 50-MV Photon Therapy Beam from a Racetrack Microtron Using MCNP4B Monte Carlo Code
NASA Astrophysics Data System (ADS)
Gudowska, I.; Sorcini, B.; Svensson, R.
The high-energy photon therapy beam from the 50 MV racetrack microtron has been evaluated using the Monte Carlo code MCNP4B. The spatial and energy distribution of photons, and the radial and depth dose distributions in the phantom, are calculated for the stationary and scanned photon beams from different targets. The calculated dose distributions are compared to experimental data obtained using a silicon diode detector. Measured and calculated depth-dose distributions are in fairly good agreement, within 2-3% for positions in the range 2-30 cm in the phantom, whereas larger discrepancies of up to 10% are observed in the dose build-up region. For the stationary beams the differences between the calculated and measured radial dose distributions are about 2-10%.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective computer-aided design tool for radiation transport code users in the nuclear field, in particular for core design and radiation analysis. (authors)
Muhammad, Wazir; Lee, Sang Hoon
2013-01-01
Detailed comparisons of the predictions of the Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs), and of their advantages and shortcomings in calculating elastic scattering cross sections, can be found in the literature. However, the issues related to their implementation in the Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. Secondly, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs (i.e. A(Z, v(i)²)) over squared momentum transfer (v(i)² = v(1)², ..., v(59)²). In the current study, the roles and issues of RFFs/MFFs and the LIT in MC sampling for coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e. statistical noise) in the PDFs and either the RFFs or the MFFs. In fact, the noise in the PDFs appeared due to the use of the LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table of A(Z, v(i)²) and has no scientific background. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, this statistical noise can be efficiently removed by introducing more data points into the A(Z, v(i)²) data tables.
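The conclusion that the PDFs become smoother as the tabulation interval shrinks follows from the O(h²) error of linear interpolation. A minimal sketch, using a smooth stand-in function in place of the actual A(Z, v(i)²) tables (an assumption):

```python
import bisect
import math

def lerp_table(xs, ys, x):
    """Linear interpolation technique (LIT) on a tabulated function."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def max_interp_error(f, n_points, lo=0.0, hi=1.0, probes=1000):
    """Worst-case LIT error for an n_points-entry table of f on [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n_points - 1) for i in range(n_points)]
    ys = [f(x) for x in xs]
    return max(
        abs(lerp_table(xs, ys, lo + (hi - lo) * j / probes)
            - f(lo + (hi - lo) * j / probes))
        for j in range(probes + 1)
    )

# Smooth, monotone stand-in for the integrated A(Z, v^2) values (hypothetical).
a_table = lambda v2: 1.0 - math.exp(-3.0 * v2)
```

Tenfold-finer tabulation reduces the worst-case interpolation error by roughly two orders of magnitude, which is why denser A(Z, v(i)²) tables suppress the sampling noise.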
Wright, Tracy; Lye, Jessica E; Ramanathan, Ganesan; Harty, Peter D; Oliver, Chris; Webb, David V; Butler, Duncan J
2015-01-21
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber of 0.4-1.1% across the range of calibration energies compared to the current calibration method.
Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64 bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation
Verhaegen, Frank
2002-05-21
High atomic number (Z) heterogeneities in tissue exposed to photons with energies of up to about 1 MeV can cause significant dose perturbations in their immediate vicinity. The recently released Monte Carlo (MC) code EGSnrc (Kawrakow 2000a Med. Phys. 27 485-98) was used to investigate the dose perturbation of high-Z heterogeneities in tissue in kilovolt (kV) and 60Co photon beams. Simulations were performed of measurements with a dedicated thin-window parallel-plate ion chamber near a high-Z interface in a 60Co photon beam (Nilsson et al 1992 Med. Phys. 19 1413-21). Good agreement was obtained between simulations and measurements for a detailed set of experiments in which the thickness of the ion chamber window, the thickness of the air gap between ion chamber and heterogeneity, the depth of the ion chamber in polystyrene and the material of the interface were varied. The EGSnrc code offers several improvements in the electron and photon production and transport algorithms over the older EGS4/PRESTA code (Nelson et al 1985 Stanford Linear Accelerator Center Report SLAC-265, Bielajew and Rogers 1987 Nucl. Instrum. Methods Phys. Res. B 18 165-81). The influence of the new EGSnrc features was investigated for simulations of a planar slab of a high-Z medium embedded in water and exposed to kV or 60Co photons. It was found that using the new electron transport algorithm in EGSnrc, including relativistic spin effects in elastic scattering, significantly affects the calculation of dose distribution near high-Z interfaces. The simulations were found to be independent of the maximum fractional electron energy loss per step (ESTEPE), which was often a cause for concern in older EGS4 simulations. Concerning the new features of the photon transport algorithm, sampling of the photoelectron angular distribution was found to have a significant effect, whereas the effect of binding energies in Compton scatter was found to be negligible. A slight dose artefact very close to high
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Rising, Michael Evan
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Time series analysis of Monte Carlo neutron transport calculations
NASA Astrophysics Data System (ADS)
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
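The core of the POP approach, recovering the AR(1) coefficient (and hence the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue) from the lag-1 autocorrelation, can be sketched on synthetic data; the Gaussian innovations and the specific coefficient below are illustrative assumptions, not the paper's fission-source data:

```python
import random

def ar1_series(a, n, seed=12345):
    """Simulate x_{t+1} = a*x_t + e_t; 'a' plays the role of k_1/k_0."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Estimate the AR(1) coefficient from the lag-1 autocorrelation."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den
```

Multiplying the recovered coefficient by the fundamental mode eigenvalue (which every k-eigenvalue MC code already computes) then yields the non-fundamental mode eigenvalue.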
Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study
Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon
2012-03-15
Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 x 5 cm² (FS5), 10 x 10 cm² (FS10), and 20 x 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increases at depths of 10 and 20 cm for modulated 6 MV were 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The corresponding increases for modulated 10 MV were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of that of the conventional linac, respectively. The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains influences itself, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient
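The threshold-and-flux behavior described above can be sketched with classic Bagnold-type relations; these particular formulas and constants are textbook stand-ins, not the authors' Monte Carlo path-sampling model:

```python
import math

RHO_AIR = 1.225      # air density, kg/m^3
RHO_SAND = 2650.0    # quartz grain density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def threshold_shear_velocity(d, A=0.1):
    """Bagnold-type fluid threshold u*_t = A*sqrt(((rho_s - rho_a)/rho_a)*g*d)
    for grain diameter d in metres; A ~ 0.1 is an empirical constant."""
    return A * math.sqrt((RHO_SAND - RHO_AIR) / RHO_AIR * G * d)

def saltation_flux(u_star, d, C=1.8):
    """Bagnold-style mass flux ~ C*(rho_a/g)*u*^3 above threshold, zero below:
    the strongly nonlinear wind-speed dependence noted in the abstract."""
    if u_star <= threshold_shear_velocity(d):
        return 0.0
    return C * RHO_AIR / G * u_star ** 3
```

The cubic dependence on shear velocity, together with the hard threshold, is what makes aeolian transport so sensitive to wind statistics and so awkward for linearized models.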
Szoke, A; Brooks, E D; McKinley, M; Daffin, F
2005-03-30
The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media thermal emission is almost completely balanced by strong absorption. In a previous publication [SB03], the photon transport equation was written in terms of the deviation of the specific intensity from the local equilibrium field. We called the new form of the equations the difference formulation. The difference formulation is rigorously equivalent to the original transport equation. It is particularly advantageous in thick media, where the radiation field approaches local equilibrium and the deviations from the Planck distribution are small. The difference formulation for photon transport also clarifies the diffusion limit. In this paper, the transport equation is solved by the Symbolic Implicit Monte Carlo (SIMC) method and a comparison is made between the standard formulation and the difference formulation. The SIMC method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.
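The change of variables behind the difference formulation can be written out explicitly; the notation below (grey transport, opacity σ, Planck function B) is an assumed simplification of the authors' equations:

```latex
% Standard grey transport equation for the specific intensity I:
\frac{1}{c}\frac{\partial I}{\partial t} + \boldsymbol{\Omega}\cdot\nabla I
  = \sigma\,(B - I)
% Substituting I = B + D, where D is the deviation from local equilibrium:
\frac{1}{c}\frac{\partial D}{\partial t} + \boldsymbol{\Omega}\cdot\nabla D
  = -\,\sigma D
    - \left(\frac{1}{c}\frac{\partial B}{\partial t}
    + \boldsymbol{\Omega}\cdot\nabla B\right)
```

The bracketed terms are the derivative source terms mentioned above; the near-cancellation of emission (σB) against absorption (σI) in thick media has been removed analytically, so the Monte Carlo method only has to resolve the small deviation D.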
NASA Astrophysics Data System (ADS)
Socha, John Bronn
The first part of this thesis contains a historical perspective on the last five years of research in hot-electron transport in semiconductors. This perspective serves two purposes. First, it provides a motivation for the second part of this thesis, which deals with calculating the full velocity distribution function of hot electrons. And second, it points out many of the unsolved theoretical problems that might be solved with the techniques developed in the second part. The second part of this thesis contains a derivation of a new method for calculating velocity distribution functions. This method, the Monte Carlo trajectory integral, is well suited for calculating the time evolution of a distribution function in the presence of complicated scattering mechanisms, like scattering with acoustic and optical phonons, inter-valley scattering, Bragg reflections, and even electron-electron scattering. This method uses many of the techniques developed for Monte Carlo transport calculations, but unlike other Monte Carlo methods, the Monte Carlo trajectory integral has very good control over the variance of the calculated distribution function across the entire distribution function. Since the Monte Carlo trajectory integral only needs information on the distribution function at previous times, it is well suited to electron-electron scattering, where the distribution function must be known before the scattering rate can be calculated. Finally, this thesis ends with an application of the Monte Carlo trajectory integral to electron transport in SiO₂ in the presence of electric fields up to 12 MV/cm, and it includes a number of suggestions for applying the Monte Carlo trajectory integral to other experiments in both SiO₂ and GaAs. The Monte Carlo trajectory integral should be of special interest when supercomputers are more common, since then there will be the computing resources to include electron-electron scattering. The high-field distribution functions calculated when
NASA Astrophysics Data System (ADS)
Chow, James C. L.
2012-10-01
This study investigated radiation dose variations in pre-clinical irradiation due to the photon beam energy and the presence of tissue heterogeneity. Based on the same mouse computed tomography image dataset, three phantoms, namely heterogeneous, homogeneous and bone homogeneous, were used. These phantoms were generated by overriding the relative electron density of no voxels (heterogeneous), of all voxels (homogeneous) and of only the bone voxels (bone homogeneous) to one. 360° photon arcs with beam energies of 50-1250 keV were used in the mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. Monte Carlo simulations were carried out in parallel using multiple nodes in a high-performance computing cluster. It was found that the dose conformity increased with the photon beam energy from the keV to the MeV range. For the heterogeneous mouse phantom, increasing the photon beam energy from 50 keV to 1250 keV increased the dose deposited at the isocenter sevenfold. For the bone dose enhancement, the mean dose was 2.7 times higher when the bone heterogeneity was included, using the 50 keV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose was found for photon beams in the energy range of 50-200 keV, and the dose enhancement decreased with an increase of the beam energy. Moreover, the MeV photon beam had a higher dose at the isocenter and a better dose conformity compared to the keV beam.
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-01-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme of local grid refinement technique to reduce the computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for the tissue with high absorption and complex geometry, and coarse grids are used for the other part. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by a factor of 7.6 (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
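The two stated criteria — a photon count tied to the coarse-grid voxel number, plus splitting to restore statistics in the refined region — can be sketched as follows; the equal-weight splitting rule shown is a hypothetical weight-conserving choice, not necessarily the authors' scheme:

```python
def minimum_photon_number(nx, ny, nz, factor=5):
    """Abstract's rule of thumb: total photons >= 5x the total voxel count.
    In the local-refinement scheme this is based on the COARSE grid."""
    return factor * nx * ny * nz

def split_photon(weight, refine_factor):
    """Hypothetical splitting rule on entering a region refined by
    refine_factor per axis: r^3 sub-photons of equal weight, so the
    expected deposited energy (total weight) is conserved."""
    n = refine_factor ** 3
    return [weight / n] * n
```

Splitting lets the dense-grid region accumulate many more statistically independent path segments without raising the global photon count, which is where the reported 7.6x speed-up comes from.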
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are also reported.
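Estimating cloud moments and a large-time longitudinal macrodispersivity from two particle-cloud snapshots can be sketched with the standard relation A_L ≈ (1/2) dσ²/d(mean displacement); the exact estimator used in the paper may differ:

```python
def cloud_moments(xs):
    """First moment (centroid) and second central moment (variance)
    of a tracer cloud represented by particle positions xs."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var

def macrodispersivity(xs_t1, xs_t2):
    """Apparent longitudinal macrodispersivity from two snapshots:
    A_L ~ (1/2) * d(sigma_x^2) / d(mean displacement)."""
    m1, v1 = cloud_moments(xs_t1)
    m2, v2 = cloud_moments(xs_t2)
    return 0.5 * (v2 - v1) / (m2 - m1)
```

With many Monte Carlo realizations, the spread of such estimates across realizations provides the standard errors the abstract refers to.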
The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs
NASA Astrophysics Data System (ADS)
Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca
2008-09-01
In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.
Hart, Vern P; Doyle, Timothy E
2013-09-01
A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed.
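For comparison, the Henyey-Greenstein model mentioned above has a closed-form inverse CDF for the scattering angle, which is how photon-migration Monte Carlo codes commonly sample it:

```python
import random

def sample_hg_cos_theta(g, rng):
    """Inverse-CDF draw of cos(theta) from the Henyey-Greenstein phase
    function with anisotropy g; the mean of cos(theta) equals g."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

The spheroidal method of the abstract replaces this single-parameter phase function with numerically computed angular intensities, which is why the two models produce different extinction depths.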
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
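The tracking-processor/tally-server split can be illustrated with a toy, message-free sketch; real implementations ship scores between nodes (e.g. over MPI), and the contiguous block decomposition below is an assumption:

```python
class TallyServer:
    """Owns one contiguous block of the global tally array and
    continuously accumulates scores sent by tracking processors."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.data = [0.0] * (hi - lo)

    def accumulate(self, bin_index, score):
        self.data[bin_index - self.lo] += score

def build_servers(n_bins, n_servers):
    """Non-overlapping block decomposition of the tally space."""
    block = -(-n_bins // n_servers)  # ceiling division
    return [TallyServer(i * block, min((i + 1) * block, n_bins))
            for i in range(n_servers)]

def route_score(servers, bin_index, score):
    """A tracking processor ships each score to the owning server."""
    block = servers[0].hi - servers[0].lo
    servers[bin_index // block].accumulate(bin_index, score)
```

Because each server holds only its slice, the total tally memory scales with the number of servers rather than being replicated on every tracking node, which is the on-node memory constraint the algorithm circumvents.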
NASA Astrophysics Data System (ADS)
Kim, Don-Soo
Dose measurements and radiation transport calculations were investigated for the interactions within the human brain of fast neutrons, slow neutrons, thermal neutrons, and photons associated with accelerator-based boron neutron capture therapy (ABNCT). To estimate the overall dose to the human brain, it is necessary to distinguish the doses from the different radiation sources. Using organic scintillators, human head phantom and detector assemblies were designed, constructed, and tested to determine the most appropriate dose estimation system to discriminate dose due to the different radiation sources that will ultimately be incorporated into a human head phantom to be used for dose measurements in ABNCT. Monoenergetic and continuous energy neutrons were generated via the 7Li(p,n)7Be reaction in a metallic lithium target near the reaction threshold using the 5.5 MV Van de Graaff accelerator at the University of Massachusetts Lowell. A human head phantom was built to measure and to distinguish the doses which result from proton recoils induced by fast neutrons, alpha particles and recoil lithium nuclei from the 10B(n,alpha)7Li reaction, and photons generated in the 7Li accelerator target as well as those generated inside the head phantom through various nuclear reactions at the same time during neutron irradiation procedures. The phantom consists of two main parts to estimate dose to tumor and dose to healthy tissue as well: a 3.22 cm³ boron loaded plastic scintillator which simulates a boron containing tumor inside the brain and a 2664 cm³ cylindrical liquid scintillator which represents the surrounding healthy tissue in the head. The Monte Carlo code MCNPX™ was used for the simulation of radiation transport due to neutrons and photons and extended to investigate the effects of neutrons and other radiation on the brain at various depths.
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous, were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly among the heterogeneous, homogeneous, and bone homogeneous phantoms for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. This dosimetric effect, due to
Single photon transport by a moving atom through sub-wavelength hole
NASA Astrophysics Data System (ADS)
Afanasiev, A. E.; Melentiev, P. N.; Kuzin, A. A.; Kalatskiy, A. Yu.; Balykin, V. I.
2016-12-01
We report an investigation of photon transport through a subwavelength hole in an opaque screen using a single neutral atom. The proposed and implemented method is based on the absorption of a photon by a neutral atom immediately before the subwavelength aperture, travel of the atom through the hole, and re-emission of the photon on the other side of the screen. The method is an alternative to the existing approaches to photon transport through a subwavelength aperture: (1) self-sustained transmission of a photon through the aperture according to Bethe's model; (2) extraordinary transmission due to surface-plasmon excitation.
Light transport and lasing in complex photonic structures
NASA Astrophysics Data System (ADS)
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials whose dielectric constant varies on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites differs greatly from that in homogeneous media owing to scattering of light in all directions. Interference of these scattered waves gives rise to many fascinating phenomena, and the field has been growing rapidly, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to the preservation of long-range positional order. In higher dimensions, however, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by dielectric networks due to the local uniformity of their dielectric constant distribution. In addition, we show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared with those of photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples. Our
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Rauf Abdullah, Nzar; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2016-09-21
We investigate theoretically the balance of the static magnetic and the dynamical photon forces in the electron transport through a quantum dot in a photon cavity with a single photon mode. The quantum dot system is connected to external leads and the total system is exposed to a static perpendicular magnetic field. We explore the transport characteristics through the system by tuning the ratio, [Formula: see text], between the photon energy, [Formula: see text], and the cyclotron energy, [Formula: see text]. Enhancement in the electron transport with increasing electron-photon coupling is observed when [Formula: see text]. In this case the photon field dominates and stretches the electron charge distribution in the quantum dot, extending it towards the contact area for the leads. Suppression in the electron transport is found when [Formula: see text], as the external magnetic field causes circular confinement of the charge density around the dot.
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.
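The path-history re-weighting idea that perturbation fluorescence Monte Carlo methods build on can be shown in a few lines. This is the standard perturbation Monte Carlo weight correction for a stored photon path, written as a generic sketch; the function name and argument layout are ours, not the paper's.

```python
import math

def perturbed_weight(w, n_scatter, path_len, mus0, mua0, mus1, mua1):
    """Re-weight a stored photon path when the optical properties of the
    perturbed region change from (mus0, mua0) to (mus1, mua1).

    w          -- original path weight
    n_scatter  -- number of scattering events inside the perturbed region
    path_len   -- total path length inside the perturbed region
    Standard perturbation MC relation:
    w' = w * (mus'/mus)^n * exp(-(mut' - mut) * L), with mut = mus + mua.
    """
    mut0, mut1 = mus0 + mua0, mus1 + mua1
    return w * (mus1 / mus0) ** n_scatter * math.exp(-(mut1 - mut0) * path_len)
```

Raising the absorption coefficient while keeping scattering fixed only adds the exponential attenuation factor, so the stored paths can be reused without re-simulation.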
Exact dissipation model for arbitrary photonic Fock state transport in waveguide QED systems.
Chen, Zihao; Zhou, Yao; Shen, Jung-Tsung
2017-02-15
We present an exact dissipation model for correlated photon transport in waveguide QED systems. This model rigorously incorporates the infinitely many degrees of freedom of the full three-dimensional photonic scattering channels in the non-excitable ambient environment. We show that the photon leakages to the scattering channels can be accounted for by a reduced Hamiltonian and a restricted eigen-state, with a resultant atomic dissipation. This model is valid for arbitrary photonic Fock and coherent states.
Gorshkov, Anton V; Kirillin, Mikhail Yu
2015-08-01
Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulations, with speed-ups comparable to a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
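The core of such light-transport codes is an embarrassingly parallel photon random walk, which is why they port well to many-core accelerators. The toy model below is a minimal sketch of that walk (semi-infinite medium, isotropic redirection, weight roulette replaced by a simple cutoff); it is our illustration, not the authors' code, and all numbers are invented.

```python
import math
import random

def absorbed_fraction(n_photons, mua, mus, seed=1):
    """Fraction of launched photon-packet weight absorbed in a semi-infinite
    medium (z > 0). Photons enter at z = 0 heading +z; at each interaction
    a share mua/mut of the weight is deposited and the packet survives with
    the scattering albedo mus/mut. Toy sketch only."""
    random.seed(seed)
    mut = mua + mus
    total = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while w > 1e-4:
            z += uz * (-math.log(1.0 - random.random()) / mut)  # free path
            if z < 0.0:                      # escaped back through surface
                break
            total += w * (mua / mut)         # deposit absorbed share
            w *= mus / mut                   # survive with the albedo
            uz = random.uniform(-1.0, 1.0)   # isotropic redirection (toy)
        # residual weight below the cutoff is simply discarded here
    return total / n_photons

f = absorbed_fraction(2000, 1.0, 9.0)
```

Each photon history is independent, so the outer loop maps directly onto OpenMP threads or Xeon Phi / GPU lanes.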
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000^{®} problems. These benchmark and scaling studies show promising results.
Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code
Zheng, S.H.; Vergnaud, T.; Nimal, J.C.
1998-03-01
Neutron transport calculations need an accurate treatment of cross sections. Two methods (multi-group and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries well adapted to describing neutron interactions in the unresolved resonance energy range. Its advantage is that it properly represents the neutron cross-section fluctuations within a given energy group, allowing correct calculation of the self-shielding effect. This PT cross-section representation is also suitable for simulating neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at the Commissariat à l'Énergie Atomique, and several validation calculations are presented. The PT method is shown to be valid not only in the unresolved resonance range but also in all the other energy ranges.
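Sampling from a probability table reduces to inverting a per-group cumulative distribution over cross-section "bands". The sketch below assumes a simple list-of-pairs layout for one energy group; the table values are made up for illustration and do not come from TRIPOLI-3 libraries.

```python
import bisect
import random

def sample_xs(prob_table, xi=None):
    """Sample a cross section from a probability table for one
    unresolved-resonance energy group.

    prob_table -- list of (cumulative_probability, cross_section_barns)
                  pairs, cumulative values ending at 1.0 (assumed layout).
    """
    if xi is None:
        xi = random.random()
    cdf = [p for p, _ in prob_table]
    i = bisect.bisect_left(cdf, xi)   # first band whose CDF bound >= xi
    return prob_table[i][1]

# Made-up 3-band table for one group (cumulative probability, sigma in barns)
table = [(0.2, 3.1), (0.7, 8.5), (1.0, 21.0)]
```

Because each flight samples a fresh cross section from the table, the fluctuation of the cross section within the group, and hence the self-shielding effect, is reproduced statistically.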
Zheng, Xiao J; Chow, James C L
2017-01-01
AIM To investigate the dose enhancement due to the incorporation of nanoparticles in skin therapy using kilovoltage (kV) photon and megavoltage (MV) electron beams. Monte Carlo simulations were used to predict the dose enhancement when different types and concentrations of nanoparticles were added to skin target layers of varying thickness. METHODS Clinical kV photon beams (105 and 220 kVp) and MV electron beams (4 and 6 MeV), produced by a Gulmay D3225 orthovoltage unit and a Varian 21 EX linear accelerator, were simulated using the EGSnrc Monte Carlo code. Doses at skin target layers with thicknesses ranging from 0.5 to 5 mm for the photon beams and 0.5 to 10 mm for the electron beams were determined. Au, Pt, I, Ag and Fe2O3 nanoparticles with concentrations ranging from 3 to 40 mg/mL were added to the skin target layer. The dose enhancement ratio (DER), defined as the dose at the target layer with nanoparticle addition divided by the dose at the layer without nanoparticle addition, was calculated for each nanoparticle type, nanoparticle concentration and target layer thickness. RESULTS Among all nanoparticles, Au had the highest DER (5.2-6.3) when irradiated with kV photon beams. The dependence of the DER on the target layer thickness was not significant for the 220 kVp photon beam, but it was for the 105 kVp beam at Au nanoparticle concentrations higher than 18 mg/mL. For the other nanoparticles, the DER depended on the atomic number of the nanoparticle and the energy spectrum of the photon beams. All nanoparticles showed an increase of DER with nanoparticle concentration under photon beam irradiation, regardless of thickness. For electron beams, the Au nanoparticles had the highest DER (1.01-1.08) at a beam energy of 4 MeV, but this was drastically lower than the DER values found with photon beams. The DER was also affected by the depth of maximum dose of the electron beam and the target thickness. For
1991-03-01
[Garbled scan of a 1991 AFIT thesis, report no. AFIT/GNE/ENP/91M-6, titled "HIGH ALTITUDE NEUTRAL..." (approved for public release; distribution unlimited). The recoverable preface states that the purpose of the study was to perform Monte Carlo simulations of neutral particle transport with primary and secondary particles; the recoverable contents include a spatial cell geometry for co-altitude detectors and an MCNP vs. SMAUG neutron fluence comparison.]
Yamanaka, M; Takashina, M; Kurosu, K; Koizumi, M; Moskvin, V; Das, I
2015-06-15
Purpose: We present a Monte Carlo based evaluation of the shielding effect for secondary neutrons from a patient collimator, and of the secondary photons emitted in the process of neutron shielding, using a combination of moderator and boron-10 placed around the patient collimator. Methods: The PHITS Monte Carlo radiation transport code was used to simulate the proton beam (Ep = 64 to 93 MeV) from a proton therapy facility. Moderators (water, polyethylene and paraffin) and boron (pure {sup 10}B) were placed around the patient collimator, in this order. The ratio of moderator to boron thickness was varied while fixing the total thickness at 3 cm. The secondary neutron and photon doses were evaluated as the ambient dose equivalent per absorbed dose [H*(10)/D]. Results: The secondary neutrons are shielded more effectively by the combination of moderator and boron. The most effective combination for shielding neutrons is 2.4 cm of polyethylene with 0.6 cm of boron, giving a maximum reduction rate of 47.3%. The H*(10)/D of secondary photons in the control case is less than that of neutrons by two orders of magnitude, and the maximum increase of secondary photons is 1.0 µSv/Gy, with 2.8 cm of polyethylene and 0.2 cm of boron. Conclusion: The combination of moderator and boron is beneficial for shielding secondary neutrons. Both the secondary photons of the control case and those emitted during neutron shielding are much lower than the secondary neutrons, and photons have a low RBE in comparison with neutrons. Therefore, the secondary photons can be ignored when shielding neutrons. This work was supported by JSPS Core-to-Core Program (No. 23003).
Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
NASA Astrophysics Data System (ADS)
Bošnjaković, D.; Petrović, Z. Lj; White, R. D.; Dujko, S.
2014-10-01
A multi term theory for solving the Boltzmann equation and Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs) that are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as an input in our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component with no indication of any NDC for the flux component in the ALICE timing RPC system. We systematically study the origin and mechanisms for such phenomena as well as the possible physical implications which arise from their explicit inclusion into models of RPCs. Spatially-resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena.
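The bulk/flux distinction central to this study is easy to state computationally: the flux drift velocity is the ensemble mean of the instantaneous velocities, while the bulk drift velocity is the time derivative of the centre of mass, and non-conservative collisions (attachment, ionization) make the two differ. The sketch below illustrates this with invented toy ensemble data, not RPC simulation output.

```python
def drift_velocities(z_start, z_end, v_end, dt):
    """Bulk drift velocity d<z>/dt versus flux drift velocity <v> for an
    electron ensemble observed over a time interval dt (toy illustration
    of the distinction between the two transport coefficients)."""
    mean = lambda xs: sum(xs) / len(xs)
    w_bulk = (mean(z_end) - mean(z_start)) / dt
    w_flux = mean(v_end)
    return w_bulk, w_flux

# Attachment has removed a slow electron from the back of the swarm between
# the two snapshots, so the centre of mass advances faster than any
# individual electron moves: bulk > flux.
w_bulk, w_flux = drift_velocities([0.0, 0.0, 0.0], [2.0, 3.0], [1.0, 1.0], 1.0)
```

This is also why phenomena like negative differential conductivity can appear in the bulk component while being absent from the flux component.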
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
Wang, Lilie L. W.; Klein, David; Beddar, A. Sam
2010-10-15
Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: A plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached to a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached to an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5% at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1% over the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the {+-}1% level, no beam energy correction is necessary for the response of any of the three PSDs modeled in this study over photon energies ranging from 200 keV (monoenergetic) to 18 MV (linac beam). The PSD would be even closer
Palma, Bianey Atriana; Sánchez, Ana Ureba; Salguero, Francisco Javier; Arráns, Rafael; Sánchez, Carlos Míguez; Zurita, Amadeo Walls; Hermida, María Isabel Romero; Leal, Antonio
2012-03-07
The purpose of this study was to present a Monte-Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, and based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed similar acceptable planning target volume (PTV) coverage as conventional IMRT plans with 90% of PTV volume covered by the prescribed dose (D(p)). Heart and ipsilateral lung receiving 5% D(p) and 15% D(p), respectively, was 3.2-3.6 times lower for CARMEN plans. Ipsilateral breast receiving 50% D(p) and 100% D(p) was an average of 1.4-1.7 times lower for CARMEN plans. Skin and whole body low-dose volume was also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.
Keall, Paul J; Siebers, Jeffrey V; Libby, Bruce; Mohan, Radhe
2003-04-01
An accurate dose calculation in phantom and patient geometries requires an accurate description of the radiation source. Errors in the radiation source description are propagated through the dose calculation. With the emergence of linear accelerators whose dosimetric characteristics are similar to within measurement uncertainty, the same radiation source description can be used as the input to dose calculation for treatment planning at many institutions with the same linear accelerator model. Our goal in the current research was to determine the initial electron fluence above the linear accelerator target for such an accelerator to allow a dose calculation in water to within 1% or 1 mm of the measured data supplied by the manufacturer. The method used for both the radiation source description and the patient transport was Monte Carlo. The linac geometry was input into the Monte Carlo code using the accelerator's manufacturer's specifications. Assumptions about the initial electron source above the target were made based on previous studies. The free parameters derived for the calculations were the mean energy and radial Gaussian width of the initial electron fluence and the target density. A combination of the free parameters yielded an initial electron fluence that, when transported through the linear accelerator and into the phantom, allowed a dose-calculation agreement to the experimental ion chamber data to within the specified criteria at both 6 and 18 MV nominal beam energies, except near the surface, particularly for the 18 MV beam. To save time during Monte Carlo treatment planning, the initial electron fluence was transported through part of the treatment head to a plane between the monitor chambers and the jaws and saved as phase-space files. These files are used for clinical Monte Carlo-based treatment planning and are freely available from the authors.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and {alpha}-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute {alpha}-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
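The k-eigenvalue that MC++ computes is estimated by the classic successive-generations (power-iteration) scheme: track a batch of neutrons, count the fission neutrons they produce, and take the ratio. The toy below collapses this to an infinite homogeneous one-group medium, where the analytic answer is k = ν Σ_f / Σ_a; the cross sections and structure are invented for the sketch and have nothing to do with MC++ internals.

```python
import random

def k_eigenvalue(batches, particles, sigma_f, sigma_c, nu, seed=2):
    """Toy infinite-medium, one-group Monte Carlo k-eigenvalue estimate.

    Every history ends in absorption; with probability sigma_f/sigma_a the
    absorption is a fission producing nu neutrons, otherwise it is capture.
    k per batch = fission neutrons produced / neutrons started.
    """
    random.seed(seed)
    sigma_a = sigma_f + sigma_c
    k_estimates = []
    for _ in range(batches):
        produced = 0.0
        for _ in range(particles):
            if random.random() < sigma_f / sigma_a:
                produced += nu
        k_estimates.append(produced / particles)
    return sum(k_estimates) / batches

# analytic value for these invented numbers: k = 2.5 * 1.0 / 2.5 = 1.0
k_inf = k_eigenvalue(50, 1000, 1.0, 1.5, 2.5)
```

Real codes like MC++ add spatial tracking, fission-site banking between generations, and parallel reduction of the batch estimates, but the estimator has this same shape.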
Cavity-photon-switched coherent transient transport in a double quantum waveguide
Abdullah, Nzar Rauf; Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei
2014-12-21
We study cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon-tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be switched via the choice of an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, that is, either parallel or perpendicular to the direction of electron propagation in the waveguide system, is demonstrated.
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in the mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams, as well as for dosimetry of pediatric or animal subjects, whose bone composition may differ substantially from that of adult human bones.
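The fitting step described in this abstract, expressing an elemental fraction as a polynomial in bone density, is a one-liner with NumPy. The density/calcium-fraction pairs below are invented numbers in the spirit of the approach, NOT the published bone data used by the authors.

```python
import numpy as np

# Illustrative (density, calcium weight-fraction) pairs -- invented values,
# not the literature bone compositions the paper fits.
rho  = np.array([1.18, 1.33, 1.46, 1.61, 1.85])    # g/cm^3
f_ca = np.array([0.045, 0.098, 0.141, 0.176, 0.225])

coeffs = np.polyfit(rho, f_ca, 2)              # Ca fraction as quadratic in density
f_ca_at_150 = float(np.polyval(coeffs, 1.50))  # model bone at 1.50 g/cm^3
```

A phosphorus fit works the same way, and the remaining elements are then assigned their averaged fractions, which is what lets a continuum of bone densities in a CT image map onto physically plausible compositions.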
NASA Astrophysics Data System (ADS)
Kahraman, A.; Kaya, S.; Jaksic, A.; Yilmaz, E.
2015-05-01
Radiation-sensing Field Effect Transistors (RadFETs or MOSFET dosimeters) with SiO2 gate dielectric have found applications in space, radiotherapy clinics, and high-energy physics laboratories. More sensitive RadFETs, which require modifications in device design, including gate dielectric, are being considered for personal dosimetry applications. This paper presents results of a detailed study of the RadFET energy response simulated with PENELOPE Monte Carlo code. Alternative materials to SiO2 were investigated to develop high-efficiency new radiation sensors. Namely, in addition to SiO2, Al2O3 and HfO2 were simulated as gate material and deposited energy amounts in these layers were determined for photon irradiation with energies between 20 keV and 5 MeV. The simulations were performed for capped and uncapped configurations of devices irradiated by point and extended sources, the surface area of which is the same with that of the RadFETs. Energy distributions of transmitted and backscattered photons were estimated using impact detectors to provide information about particle fluxes within the geometrical structures. The absorbed energy values in the RadFETs material zones were recorded. For photons with low and medium energies, the physical processes that affect the absorbed energy values in different gate materials are discussed on the basis of modelling results. The results show that HfO2 is the most promising of the simulated gate materials.
NASA Astrophysics Data System (ADS)
Zhang, Hai-Feng; Liu, Shao-Bin
2016-08-01
In this paper, the properties of photonic band gaps (PBGs) in two types of two-dimensional plasma-dielectric photonic crystals (2D PPCs) under a transverse-magnetic (TM) wave are theoretically investigated by a modified plane wave expansion (PWE) method into which a Monte Carlo method is introduced. The proposed PWE method can be used to calculate the band structures of 2D PPCs that possess an arbitrarily shaped filler and any lattice. The efficiency and convergence of the present method are discussed through a numerical example. The configuration of the 2D PPCs is a square lattice with a fractal Sierpinski gasket structure whose constituents are homogeneous and isotropic. The type-1 PPC is filled with dielectric cylinders in a plasma background, while its complementary structure, called the type-2 PPC, has plasma cylinders as the fillers in a dielectric background. The calculated results reveal that sufficient accuracy and good convergence can be obtained if the number of random sampling points of the Monte Carlo method is large enough. The band structures of the two types of PPCs with different fractal orders of the Sierpinski gasket structure are also computed for comparison. It is demonstrated that PBGs in the higher frequency region are more easily produced in the type-1 PPCs than in the type-2 PPCs. The Sierpinski gasket structure introduced in the 2D PPCs leads to a larger cutoff frequency and enhances and induces more PBGs in the high frequency region. The effects of the configurational parameters of the two types of PPCs on the PBGs are also investigated in detail. The results show that the PBGs of the PPCs can be easily manipulated by tuning those parameters. The present type-1 PPCs are more suitable for designing tunable compact devices.
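The role of the Monte Carlo method inside a PWE calculation is to estimate the Fourier coefficients of the dielectric map when the filler shape (here a fractal Sierpinski gasket) has no closed-form transform. The sketch below shows that idea for a square unit cell with an arbitrary filler predicate; the interface and the circular-rod example are our illustration, not the paper's implementation.

```python
import cmath
import random

def eps_fourier_mc(eps_fill, eps_bg, inside, G, n_samples=20000, seed=3):
    """Monte Carlo estimate of one Fourier coefficient eps(G) of the
    dielectric map over the unit square, for an arbitrarily shaped filler
    described by the predicate inside(x, y) (sketch interface, ours)."""
    random.seed(seed)
    gx, gy = G
    acc = 0j
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        eps = eps_fill if inside(x, y) else eps_bg
        acc += eps * cmath.exp(-2j * cmath.pi * (gx * x + gy * y))
    return acc / n_samples

# Circular rod of radius 0.3 centred in the cell; the G = (0, 0) coefficient
# is just the area-averaged dielectric constant, so it can be checked against
# eps_bg + (eps_fill - eps_bg) * pi * r^2.
rod = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.09
eps_avg = eps_fourier_mc(12.0, 1.0, rod, (0, 0)).real
```

With enough sampling points the estimates converge for any filler shape, which is exactly the "enough accuracy and good convergence" behaviour the abstract reports.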
Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A
2005-06-06
An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous energy cross sections and S(α,β) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.
Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Yang, Lina; Minnich, Austin J.
2017-03-01
Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome
2016-05-01
We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.
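The core of a perturbation Monte Carlo estimator is to reweight stored baseline photon paths instead of rerunning the simulation: a reflected path with k scattering events and total in-medium path length S picks up the weight (μs'/μs)^k · exp(−(μt'−μt)·S) under perturbed coefficients. The following is a minimal unpolarized 1D slab sketch of that reweighting idea (our own toy geometry and parameters, not the authors' polarization-sensitive cervical-tissue model):

```python
import math
import random

def run_slab(mu_s, mu_a, thickness, n_photons, seed):
    """Analog 1D slab MC: isotropic (+/-1) scattering, analog absorption.
    Returns (reflectance, list of (k, S) for each reflected photon)."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    reflected = []
    for _ in range(n_photons):
        x, d, k, path = 0.0, +1, 0, 0.0
        while True:
            step = -math.log(rng.random()) / mu_t
            x_new = x + d * step
            if x_new < 0.0:                       # escapes front face: reflected
                reflected.append((k, path + x))   # include distance to boundary
                break
            if x_new > thickness:                 # transmitted
                break
            path += step
            x = x_new
            if rng.random() < mu_a / mu_t:        # absorbed
                break
            k += 1                                # scattered: random new direction
            d = +1 if rng.random() < 0.5 else -1
    return len(reflected) / n_photons, reflected

def pmc_reflectance(baseline_paths, n_photons, mu_s, mu_a, mu_s2, mu_a2):
    """Reweight baseline reflected paths to perturbed (mu_s2, mu_a2)."""
    mu_t, mu_t2 = mu_s + mu_a, mu_s2 + mu_a2
    total = 0.0
    for k, path in baseline_paths:
        total += (mu_s2 / mu_s) ** k * math.exp(-(mu_t2 - mu_t) * path)
    return total / n_photons

r0, paths = run_slab(mu_s=2.0, mu_a=0.5, thickness=1.0, n_photons=20_000, seed=7)
r_pmc = pmc_reflectance(paths, 20_000, 2.0, 0.5, 2.4, 0.5)
r_direct, _ = run_slab(2.4, 0.5, 1.0, 20_000, 11)
```

As in the paper's comparison against conventional Monte Carlo, the reweighted estimate agrees with a direct perturbed simulation to within statistical noise while reusing the baseline paths, which is where the efficiency gain comes from.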
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
NASA Astrophysics Data System (ADS)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified on test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited from the fluid simulator), the implementation was made efficient and reusable.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
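A standard ingredient of such swarm codes is the null-collision technique for sampling free-flight times when the collision frequency ν(E) varies along the trajectory: candidate flights are drawn at a constant majorant rate ν_max and each candidate is accepted as a real collision with probability ν(E)/ν_max. A hedged sketch follows (the toy constant collision frequency and units are our assumptions, not taken from the paper; with constant ν the accepted waiting time is exactly exponential with rate ν, which makes the method easy to check):

```python
import math
import random

def free_flight_time(nu, nu_max, energy, rng):
    """Null-collision sampling: draw candidate flights at the constant majorant
    rate nu_max; accept each candidate as a real collision with probability
    nu(energy)/nu_max. The returned waiting time has rate nu(energy)."""
    t = 0.0
    while True:
        t += -math.log(rng.random()) / nu_max   # candidate (real or null) flight
        if rng.random() < nu(energy) / nu_max:  # real collision: stop here
            return t

# Toy constant collision frequency (arbitrary units) -- assumed for the demo.
nu = lambda energy: 5.0
rng = random.Random(3)
times = [free_flight_time(nu, nu_max=12.0, energy=1.0, rng=rng)
         for _ in range(20_000)]
mean_time = sum(times) / len(times)             # should approach 1/nu = 0.2
```

In a real swarm code ν(E) is assembled from the LXCat cross sections and the electron energy is updated between candidate flights; the acceptance step is what removes the bias of the constant-rate draw.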
Effects of magnetic field on photon-induced quantum transport in a single dot-cavity system
NASA Astrophysics Data System (ADS)
Abdullah, Nzar Rauf; Fatah, Aziz H.; Fatah, Jabar M. A.
2016-11-01
In this study, we show how a static magnetic field can control photon-induced electron transport through a quantum dot system coupled to a photon cavity. The quantum dot system is connected to two electron reservoirs and exposed to an external perpendicular static magnetic field. The propagation of electrons through the system is thus influenced by the static magnetic and the dynamic photon fields. It is observed that the photon cavity forms photon replica states controlling electron transport in the system. If the photon field has more energy than the cyclotron energy, then the photon field is dominant in the electron transport. Consequently, the electron transport is enhanced due to activation of photon replica states. By contrast, the electron transport is suppressed in the system when the photon energy is smaller than the cyclotron energy.
Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie
2011-01-15
Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2×2 to 35×35 cm²). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma ["On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam," Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E_AV, the mean energy; FWHM, the energy spectrum broadening; and R, the beam radius), the angular divergence (θ) of the primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2×2 to 35×35 cm². In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use θ = 0°, while the 4 and 15 MV beam models require θ = 0.5° and 0.6°, respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2×2 cm²) and large (e.g., 35×35 cm²) fields, while a perfect agreement is
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of the tracks. A set of data including the track of the particle was produced for each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated using MCNPX as a reference code: while an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our calculated doses and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX: the fast MC code developed in this work takes less than 2 min to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
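The pre-generated-track idea can be caricatured as replacing on-the-fly physics sampling with lookups into a per-material database of pre-computed steps. The sketch below uses toy numbers and structure of our own invention (the real database was derived from MCNPX, not from these values); it transports a proton by drawing stored (step length, energy loss) pairs and checks that energy is conserved:

```python
import random

# Toy pre-computed step database, indexed by material: each entry is a
# (step_length_cm, energy_loss_MeV) sample that a full MC code would have
# generated offline.  All values here are assumptions for illustration only.
STEP_DB = {
    "water": [(0.10, 1.0), (0.11, 1.2), (0.09, 0.9), (0.10, 1.1)],
    "bone":  [(0.06, 1.6), (0.05, 1.5), (0.07, 1.8)],
}

def transport(energy_mev, material, rng, cutoff=2.0):
    """Fast MC transport: repeatedly draw a pre-computed step for the current
    material until the proton energy falls below the cutoff, then deposit the
    remainder locally.  Returns (range_cm, total_deposited_mev)."""
    depth, deposited = 0.0, 0.0
    while energy_mev > cutoff:
        step, de = rng.choice(STEP_DB[material])
        de = min(de, energy_mev)      # never deposit more energy than is left
        depth += step
        deposited += de
        energy_mev -= de
    deposited += energy_mev           # terminal local deposition
    return depth, deposited

depth_w, dep_w = transport(100.0, "water", random.Random(42))
depth_b, dep_b = transport(100.0, "bone", random.Random(1))
```

Because every step is a table lookup rather than a cross-section sample, the inner loop is trivially cheap, which is the source of the large speedup the abstract reports; denser materials (larger loss per unit length) naturally yield shorter ranges.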
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
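A specific absorbed fraction is the fraction of emitted energy absorbed in a target region divided by the target mass. The toy below is a deliberate simplification of our own (monoenergetic photons from an isotropic point source in an infinite, purely absorbing medium, with a spherical-shell target; nothing like the BREP anatomy): it estimates the absorbed fraction by sampling first-interaction distances and compares it with the closed form exp(−μr1) − exp(−μr2):

```python
import math
import random

def absorbed_fraction_mc(mu, r1, r2, n, seed=0):
    """MC estimate of the fraction of photons depositing their energy in the
    spherical shell r1 < r < r2 around an isotropic point source, assuming a
    purely absorbing infinite medium (each photon is absorbed at its first
    interaction, at a distance s ~ Exp(mu))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        s = -math.log(rng.random()) / mu   # distance to the (only) interaction
        if r1 < s < r2:
            hits += 1
    return hits / n

def specific_absorbed_fraction(absorbed_fraction, target_mass_kg):
    """SAF = absorbed fraction per unit target mass (kg^-1)."""
    return absorbed_fraction / target_mass_kg

mu, r1, r2 = 0.2, 5.0, 10.0                # cm^-1 and cm; assumed toy values
af_mc = absorbed_fraction_mc(mu, r1, r2, n=50_000)
af_exact = math.exp(-mu * r1) - math.exp(-mu * r2)
```

The full calculation replaces the analytic geometry with voxelized anatomy and tracks scattered photons, but the tally reduces to the same two numbers per source-target pair: energy absorbed in the target and target mass.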
González, Oswaldo; Rodríguez, Silvestre; Pérez-Jiménez, Rafael; Mendoza, Beatriz R; Ayala, Alejandro
2011-01-31
We present a comparison between the modified Monte Carlo algorithm (MMCA) and a recently proposed ray-tracing algorithm named the photon-tracing algorithm. Both methods are compared exhaustively in terms of error and computational cost. We show that the new photon-tracing method offers a solution with a slightly greater error but requires considerably less computing time. Moreover, from a practical point of view, the solutions obtained with both algorithms are approximately equivalent, demonstrating the merit of the new photon-tracing method.
New Capabilities in Mercury: A Modern, Monte Carlo Particle Transport Code
Procassini, R J; Cullen, D E; Greenman, G M; Hagmann, C A; Kramer, K J; McKinley, M S; O'Brien, M J; Taylor, J M
2007-03-08
The new physics, algorithmic and computer science capabilities of the Mercury general-purpose Monte Carlo particle transport code are discussed. The new physics and algorithmic features include in-line energy deposition and isotopic depletion, significant enhancements to the tally and source capabilities, diagnostic ray-traced particles, support for multi-region hybrid (mesh and combinatorial geometry) systems, and a probability-of-initiation method. Computer science enhancements include a second method of dynamically load-balancing parallel calculations, improved methods for visualizing 3-D combinatorial geometries, and an initial implementation of in-line visualization capabilities.
NASA Astrophysics Data System (ADS)
Péron, A.; Malouch, F.; Zoia, A.; Diop, C. M.
2014-06-01
Nuclear heating evaluation by Monte Carlo simulation requires a coupled neutron-photon calculation so as to take into account the contribution of secondary photons. Nuclear data are essential for a good calculation of neutron and photon energy deposition and for secondary photon generation. However, a number of isotopes in the most common nuclear data libraries are affected by energy and/or momentum conservation errors in their photon production data, or by inaccurate thresholds for photon emission cross sections. In this paper, we perform a comprehensive survey of the three evaluations JEFF3.1.1, JEFF3.2T2 (beta version) and ENDF/B-VII.1 over 142 isotopes. The aim of this survey is, on the one hand, to check the existence of photon production data for neutron reactions and, on the other hand, to verify the consistency of these data using the kinematic-limits method recently implemented in the TRIPOLI-4 Monte Carlo code, developed by CEA (Saclay center). The impact of these inconsistencies on energy deposition scores has then been estimated for two materials using a specific nuclear heating calculation scheme in the context of the OSIRIS Material Testing Reactor (CEA/Saclay).
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within +/-1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE.
NASA Astrophysics Data System (ADS)
Yamaguchi, Mitsutaka; Nagao, Yuto; Satoh, Takahiro; Sugai, Hiroyuki; Sakai, Makoto; Arakawa, Kazuo; Kawachi, Naoki
2017-01-01
The purpose of this study is to determine whether the main component of the low-energy (63-68 keV) particles emitted perpendicularly to the 12C beam from the 12C-irradiated region in a water phantom is secondary electron bremsstrahlung (SEB). Monte Carlo simulations of a 12C-beam (290 MeV/u) irradiated on a water phantom were performed. A detector was placed beside the water phantom with a lead collimator between the phantom and the detector. To move the Bragg-peak position, a binary filter was placed in an upper stream of the phantom. The energy distributions of the particles incident on the detector and those deposited in the detector were analyzed. The simulation was also performed with suppressed delta-ray and/or bremsstrahlung generation to identify the SEB components. It was found that the particles incident on the detector were predominantly photons and neutrons. The yields of the photons and energy deposition decreased with the suppression of SEB generation. It is concluded that one of the predominant components of the yields in the regions shallower than the Bragg-peak position is due to SEB generation, and these components become significantly smaller in regions deeper than the Bragg-peak position.
Babich, L. P. Donskoy, E. N.; Kutsyk, I. M.
2008-07-15
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
NASA Astrophysics Data System (ADS)
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-07-01
A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm², 4 × 14.5 cm² or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department.
Comparison of generalized transport and Monte-Carlo models of the escape of a minor species
NASA Technical Reports Server (NTRS)
Demars, H. G.; Barakat, A. R.; Schunk, R. W.
1993-01-01
The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments, and poorer for higher-order moments. Moments of order higher than the heat flow, and hence beyond the level of description provided by the transport model, have a noticeable effect on the shape of distribution functions in the collisionless region.
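The model comparison above hinges on extracting successive velocity moments (density, drift, temperature, heat flow) from the simulated particle distribution. A minimal sketch of that moment extraction from Monte Carlo velocity samples follows (a 1D Maxwellian test case of our own choosing, for which the drift and the heat flow should vanish; none of this is the papers' 16-moment machinery):

```python
import random

def velocity_moments(v):
    """Lower-order moments of a sampled 1D velocity distribution: the mean
    (drift), the variance (temperature in suitable units), and the third
    central moment (proportional to the heat flow)."""
    n = len(v)
    drift = sum(v) / n
    var = sum((u - drift) ** 2 for u in v) / n
    heat = sum((u - drift) ** 3 for u in v) / n
    return drift, var, heat

rng = random.Random(5)
# Collision-dominated limit: near-Maxwellian velocities (unit thermal speed).
samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
drift, temperature, heat_flow = velocity_moments(samples)
```

In the collisionless region the sampled distribution develops strong skew and the third (and higher) central moments grow, which is exactly where the 16-moment closure and the Monte Carlo model begin to disagree.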
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high-electric-field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1×10^18 cm^-3 at room temperature. The simulated electric-field dependence of the drift velocity is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
Deterministic and Monte Carlo Neutron Transport Calculations of the Dounreay Fast Breeder Reactor
Ziver, A. Kemal; Shahdatullah, Sabu; Eaton, Matthew D.; Oliviera, Cassiano R.E. de; Ackroyd, Ron T.; Umpleby, Adrian P.; Pain, Christopher C.; Goddard, Antony J. H.; Fitzpatrick, James
2004-12-15
A homogenized whole-reactor cylindrical model of the Dounreay Fast Reactor has been constructed using both deterministic and Monte Carlo codes to determine neutron flux distributions inside the core and at various out-of-core components. The principal aim is to predict neutron-induced activation levels using both methods and make comparisons against the measured thermal reaction rates. Neutron transport calculations have been performed for a fixed source using a spatially lumped fission neutron distribution, which has been derived from measurements. The deterministic code used is based on the finite element approximation to the multigroup second-order even-parity neutron transport equation, which is implemented in the EVENT code. The Monte Carlo solutions were obtained using the MCNP4C code, in which neutron cross sections are represented in pointwise (or continuous) form. We have compared neutron spectra at various locations not only to show differences between using multigroup deterministic and continuous energy (point nuclear data) Monte Carlo methods but also to assess neutron-induced activation levels calculated using the spectra obtained from both methods. Results were also compared against experiments that were carried out to determine neutron-induced reaction rates. To determine activation levels, we employed the European Activation Code System FISPACT. We have found that the neutron spectra calculated at various in-core and out-of-core components show some differences, which mainly reflect the multigroup and point-energy nuclear data libraries and methods employed, but these differences have not resulted in large errors in the calculated activation levels of materials that are important (such as steel components) for decommissioning studies of the reactor. The agreement of calculated reaction rates of thermal neutron detectors such as {sup 55}Mn(n,{gamma}){sup 56}Mn against measurements was satisfactory.
Araki, Fujio
2012-11-21
The purpose of this study was to investigate the perturbation correction factors and inhomogeneity correction factors (ICFs) for a thin-walled cylindrical ion chamber in a heterogeneous phantom including solid water, lung and bone plastic materials. The perturbation factors due to the replacement of the air cavity, non-water equivalence of the wall and the stem, non-air equivalence of the central electrode and the overall perturbation factor, P(Q), for a cylindrical chamber, in the heterogeneous phantom were calculated with the EGSnrc/Cavity Monte Carlo code for 6 and 15 MV photon beams. The PTW31010 (0.125 cm(3)) chamber was modeled with Monte Carlo simulations, and was used for measurements and calculations of percentage depth ionization (PDI) or percentage depth dose (PDD). ICFs were calculated from the ratio of the product of the stopping power ratios (SPRs) and P(Q) of lung or bone to solid water. Finally, the measured PDIs were converted to PDDs by using ICFs and were compared with those calculated by the Monte Carlo method. The perturbation effect for the ion chamber in lung material is insignificant at 5 × 5 and 10 × 10 cm(2) fields, but the effect needs to be considered under conditions of lateral electron disequilibrium with a 3 × 3 cm(2) field. ICFs in lung varied up to 2% and 4% depending on the field size for 6 and 15 MV, respectively. For bone material, the perturbation effects due to the chamber wall and the stem were more significant at up to 3.5% and 1.6% for 6 MV, respectively. ICFs for bone material were approximately 0.945 and 0.940 for 6 and 15 MV, respectively. The converted PDDs by using ICFs were in good agreement with Monte Carlo calculated PDDs. The chamber perturbation correction and SPRs should strictly be considered for ion chamber dosimetry in heterogeneous media. This is more important for small field dosimetry in lung and bone materials.
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
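The workload-proportional replication mentioned above can be sketched as follows. This is a hypothetical illustration of the bookkeeping only (the paper's actual scheme lives inside a production MC transport code); it uses a largest-remainder apportionment to turn per-domain particle counts into integer replica counts:

```python
def replication_levels(workloads, total_ranks):
    """Assign each spatial domain an integer number of replica ranks
    roughly proportional to its particle workload (illustrative sketch;
    assumes total_ranks >= number of domains)."""
    total_work = sum(workloads)
    # Every domain needs at least one rank.
    levels = [1] * len(workloads)
    remaining = total_ranks - len(workloads)
    # Ideal fractional share of the remaining ranks for each domain.
    shares = [w / total_work * remaining for w in workloads]
    for i, s in enumerate(shares):
        levels[i] += int(s)          # integer part first
    leftover = total_ranks - sum(levels)
    # Hand leftovers to the domains with the largest fractional parts.
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - int(shares[i]), reverse=True)
    for i in order[:leftover]:
        levels[i] += 1
    return levels
```

For example, domains with workloads 100, 300 and 600 particles sharing 10 MPI ranks receive 2, 3 and 5 replicas respectively, so every rank in a workgroup carries a comparable particle load.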
NASA Astrophysics Data System (ADS)
Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.
2015-11-01
In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
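One of the simplest physics-informed reductions described above, restricting the binary search to energies at or below the pre-collision energy after a downscattering event, can be sketched as follows (an illustrative toy only; OpenMC's actual implementation and grid differ):

```python
import bisect

def locate(grid, energy, hi=None):
    """Binary-search an ascending energy grid for the interval index
    containing `energy`, optionally restricted to indices [0, hi].
    After downscatter the post-collision energy cannot exceed the
    pre-collision energy, so `hi` can be set to the previous index."""
    upper = len(grid) if hi is None else hi + 1
    return bisect.bisect_right(grid, energy, 0, upper) - 1

# Toy 13-point logarithmic energy grid from 1e-5 eV to 1e7 eV.
grid = [1e-5 * 10**k for k in range(13)]
i0 = locate(grid, 2.0e6)            # full search for the source energy
i1 = locate(grid, 1.5e4, hi=i0)     # truncated search after downscatter
```

The truncated call searches a shorter vector, which is exactly the kind of length reduction the optimizations exploit.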
Single-photon transport and mechanical NOON-state generation in microcavity optomechanics
NASA Astrophysics Data System (ADS)
Ren, Xue-Xin; Li, Hao-Kun; Yan, Meng-Yuan; Liu, Yong-Chun; Xiao, Yun-Feng; Gong, Qihuang
2013-03-01
We investigate the single-photon transport in a single-mode optical fiber coupled to an optomechanical system in the single-photon strong-coupling regime. The single-photon transmission amplitude is analytically obtained with a real-space approach and the effects of cavity and mechanical dissipations are studied via master-equation simulations. Based on the theoretical framework, we further propose a heralded probabilistic scheme to generate mechanical NOON states with arbitrary phonon numbers by measuring the sideband photons. The efficiency and fidelity of the scheme are discussed finally.
NASA Astrophysics Data System (ADS)
Müller, Florian; Jenny, Patrick; Meyer, Daniel
2014-05-01
To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess how the input uncertainty in permeability propagates through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, and stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
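The telescoping structure at the heart of MLMC can be illustrated with a deliberately simple toy (not a reservoir simulation): a sequence of "level-l" approximations P_l of a random quantity, where each correction term E[P_l - P_{l-1}] is sampled with the same random input at both levels so its variance is small and few fine-level samples are needed.

```python
import math, random

def P(u, level):
    """Level-l approximation of exp(u): a Taylor series truncated after
    level+2 terms (a stand-in for 'the same sample on a coarser grid')."""
    return sum(u**k / math.factorial(k) for k in range(level + 2))

def mlmc(levels, samples_per_level, rng):
    """Telescoping MLMC estimator:
       E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    with each correction sampled from the SAME input u at both levels."""
    estimate = 0.0
    for l in range(levels + 1):
        n = samples_per_level[l]
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            acc += P(u, l) - (P(u, l - 1) if l > 0 else 0.0)
        estimate += acc / n
    return estimate

rng = random.Random(42)
# Many cheap coarse samples, few expensive fine ones.
est = mlmc(levels=4, samples_per_level=[4000, 2000, 1000, 500, 250], rng=rng)
# For U ~ Uniform(0,1), E[exp(U)] = e - 1.
```

The sample counts shrink with level, mimicking how MLMC delegates most of the sampling to inexpensive coarse grids.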
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC)-developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distributions of the aforementioned particles in the Jovian radiation environment. Based on the measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light/heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM) the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file describing the geometry from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe
Single photon transport along a one-dimensional waveguide with a side manipulated cavity QED system.
Yan, Cong-Hua; Wei, Lian-Fu
2015-04-20
An external mirror coupled to a cavity containing a two-level atom is put forward to control photon transport along a one-dimensional waveguide. Using a full quantum theory of photon transport in real space, it is shown that the Rabi splittings of the photonic transmission spectra can be controlled by the cavity-mirror coupling; the splittings could still be observed even when the cavity-atom system works in the weak-coupling regime, and the transmission probability of the resonant photon can be modulated from 0 to 100%. Additionally, our numerical results show that the appearance of Fano resonance is related to the strength of the cavity-mirror coupling and the dissipations of the system. An experimental demonstration of the proposal with current photonic crystal waveguide techniques is suggested.
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describes the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform the numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM) the 105-day at-Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2016-10-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long-mean-free-path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher-fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with a MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean-free-path regions with the accuracy of a transport method in long mean-free-path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.
Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail
2015-08-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system, such as photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter or modulator, and may find important applications in quantum information.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.
Furuta, Takuya; Sato, Tatsuhiko; Han, Min; Yeom, Yeon; Kim, Chan; Brown, Justin; Bolch, Wesley
2017-04-04
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced that initially prepares decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Owing to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than in a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, by about a factor of 4, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel geometry.
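The geometric primitive underlying any tetrahedral-mesh tracker is deciding whether a point lies inside a given tetrahedron. A minimal sketch using signed volumes is shown below (purely illustrative; PHITS's actual algorithm additionally uses precomputed decomposition maps to avoid testing every tetrahedron):

```python
def _det3(u, v, w):
    # Determinant of the 3x3 matrix with rows u, v, w.
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

def _same_side(a, b, c, opp, p):
    """True if p lies on the same side of plane (a,b,c) as vertex opp."""
    def vol(x):  # signed volume of tetrahedron (a, b, c, x)
        return _det3([b[i] - a[i] for i in range(3)],
                     [c[i] - a[i] for i in range(3)],
                     [x[i] - a[i] for i in range(3)])
    return vol(opp) * vol(p) >= 0

def point_in_tet(p, a, b, c, d):
    """True if point p is inside (or on) tetrahedron (a, b, c, d)."""
    return (_same_side(a, b, c, d, p) and _same_side(a, b, d, c, p) and
            _same_side(a, c, d, b, p) and _same_side(b, c, d, a, p))

unit = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
inside = point_in_tet((0.1, 0.1, 0.1), *unit)
outside = point_in_tet((1.0, 1.0, 1.0), *unit)
```

A production tracker would hash particle positions into the container-box decomposition first, so only a handful of candidate tetrahedrons need this test per boundary crossing.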
3D electro-thermal Monte Carlo study of transport in confined silicon devices
NASA Astrophysics Data System (ADS)
Mohamed, Mohamed Y.
The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non
Randeniya, S. D.; Taddei, P. J.; Newhauser, W. D.; Yepes, P.
2010-01-01
Monte Carlo simulations of an ocular treatment beam-line consisting of a nozzle and a water phantom were carried out using MCNPX, GEANT4, and FLUKA to compare the dosimetric accuracy and the simulation efficiency of the codes. Simulated central axis percent depth-dose profiles and cross-field dose profiles were compared with experimentally measured data for the comparison. Simulation speed was evaluated by comparing the number of proton histories simulated per second using each code. The results indicate that all the Monte Carlo transport codes calculate sufficiently accurate proton dose distributions in the eye and that the FLUKA transport code has the highest simulation efficiency. PMID:20865141
NASA Astrophysics Data System (ADS)
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under some specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of tissue phantoms were measured with a spectrometer under several illumination and optical settings for various homogeneous tissue phantoms. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements of an added-absorber experiment on a series of phantoms showed that absorption of dye scales linearly when fit to both MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
Monte Carlo Modeling of Photon Propagation Reveals Highly Scattering Coral Tissue
Wangpraseurt, Daniel; Jacques, Steven L.; Petrie, Tracy; Kühl, Michael
2016-01-01
Corals are very efficient at using solar radiation, with photosynthetic quantum efficiencies approaching theoretical limits. Here, we investigated potential mechanisms underlying such outstanding photosynthetic performance by extracting the inherent optical properties of the living coral tissue and skeleton in a massive faviid coral. Using Monte Carlo simulations developed for medical tissue optics, it is shown that for the investigated faviid coral the tissue was a strongly light-scattering matrix with a reduced scattering coefficient of μs' = 10 cm{sup -1} (at 636 nm). In contrast, the reduced scattering coefficient of the coral skeleton was μs' = 3.4 cm{sup -1}, which facilitated the efficient propagation of light to otherwise shaded coral tissue layers, thus supporting photosynthesis in lower tissues. Our study provides a quantification of coral tissue optical properties in a massive faviid coral and suggests a novel light-harvesting strategy, where tissue and skeletal optics act in concert to optimize the illumination of the photosynthesizing algal symbionts embedded within the living coral tissue. PMID:27708657
Ballistic transport in one-dimensional random dimer photonic crystals
NASA Astrophysics Data System (ADS)
Cherid, Samira; Bentata, Samir; Zitouni, Ali; Djelti, Radouan; Aziz, Zoubir
2014-04-01
Using the transfer-matrix technique and the Kronig-Penney model, we numerically and analytically investigate the effect of short-range correlated disorder in the Random Dimer Model (RDM) on the transmission properties of light in one-dimensional photonic crystals made of three different materials. Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that one kind of these layers always appears in pairs. It is shown that the one-dimensional random dimer photonic crystals support two types of extended modes. By shifting the dimer resonance toward the host fundamental stationary resonance state, we demonstrate the existence of the ballistic response in these systems.
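A textbook form of the transfer-matrix computation for a 1D layer stack at normal incidence is sketched below (a generic illustration, not the authors' code; for lossless real refractive indices the transmittance and reflectance must sum to one, which is a useful sanity check):

```python
import numpy as np

def transmission(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance/reflectance of a 1D layer stack
    via the transfer-matrix method. n_layers: refractive indices,
    d_layers: thicknesses (same length unit as wavelength)."""
    def D(n):  # interface (dynamical) matrix for index n
        return np.array([[1, 1], [n, -n]], dtype=complex)
    M = np.linalg.inv(D(n_in))
    for n, d in zip(n_layers, d_layers):
        phi = 2 * np.pi * n * d / wavelength       # phase across the layer
        P = np.array([[np.exp(-1j * phi), 0],      # propagation matrix
                      [0, np.exp(1j * phi)]])
        M = M @ D(n) @ P @ np.linalg.inv(D(n))
    M = M @ D(n_out)
    t = 1 / M[0, 0]
    r = M[1, 0] / M[0, 0]
    T = (n_out / n_in) * abs(t) ** 2
    R = abs(r) ** 2
    return T, R

# Example: a three-layer stack in air at a 500 nm wavelength.
T, R = transmission([1.5, 2.3, 1.5], [100.0, 80.0, 100.0], wavelength=500.0)
```

Random dimer stacks are handled the same way: the per-layer matrices are simply multiplied in the (random) order in which the layers occur along the growth direction.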
A Photon Free Method to Solve Radiation Transport Equations
Chang, B
2006-09-05
The multi-group discrete-ordinates equations of radiation transfer are solved for the first time by Newton's method. It is a photon-free method because the photon variables are eliminated from the radiation equations to yield a system of equations that is N{sub group} x N{sub direction} times smaller but equivalent. The smaller set of equations can be solved more efficiently than the original set. Newton's method is also more stable than the Semi-implicit Linear method currently used by conventional radiation codes.
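The solver class referred to here is the standard Newton iteration for a nonlinear system F(x) = 0: solve J(x_k) dx = -F(x_k) and update x_{k+1} = x_k + dx. A generic sketch on a toy two-equation system (not the radiation-specific formulation) is:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0 with analytic Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # linearized correction
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy nonlinear system standing in for an emission-absorption balance:
#   x0^2 + x1   - 3 = 0
#   x0   + x1^3 - 5 = 0
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**3 - 5])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 3*x[1]**2]])
root = newton(F, J, [1.0, 1.0])
```

Each Newton step costs one linear solve on the reduced (photon-free) system, which is where the claimed efficiency gain over iterating the full set of equations comes from.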
Vuong, A; Chow, J
2015-06-15
Purpose: The aim of this study is to investigate the dependence of bone dose on photon beam energy (keV-MeV) in small-animal irradiation. Dosimetry of homogeneous and inhomogeneous phantoms based on the same mouse computed tomography image set was calculated using DOSCTP and DOSXYZnrc, based on the EGSnrc Monte Carlo code. Methods: Monte Carlo simulations of the homogeneous and inhomogeneous mouse phantoms irradiated by a 360-degree photon arc were carried out. Mean doses to the bone tissue in the irradiated volumes were calculated at various photon beam energies ranging from 50 keV to 1.25 MeV. The effect of bone inhomogeneity was examined through the Inhomogeneous Correction Factor (ICF), the dose ratio of the inhomogeneous to the homogeneous medium. Results: From our Monte Carlo results, higher mean bone dose and ICF were found when using kilovoltage photon beams compared to megavoltage. For beam energies ranging from 50 keV to 200 keV, the bone dose was maximum at 50 keV and decreased significantly from 2.6 Gy to 0.55 Gy when 2 Gy was delivered at the center of the phantom (isocenter). Similarly, the ICF decreased from 4.5 to 1 as the photon beam energy was increased from 50 keV to 200 keV. From 200 keV to 1.25 MeV, the mean bone dose and ICF remained at about 0.5 Gy and 1, respectively, with insignificant variation. Conclusion: It is concluded that to avoid high bone dose in small-animal irradiation, a photon beam energy higher than 200 keV should be used, for which the ICF is close to one and the bone dose is comparable to that of megavoltage beams, where the photoelectric effect is not dominant.
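The ICF used above is a plain dose ratio, which can be stated in two lines; the homogeneous-phantom value below is inferred for illustration from the quoted 2.6 Gy bone dose and ICF of 4.5, not stated in the abstract:

```python
def inhomogeneity_correction_factor(dose_inhomogeneous, dose_homogeneous):
    """ICF as defined in the abstract: dose ratio of the inhomogeneous
    to the homogeneous medium at the same point."""
    return dose_inhomogeneous / dose_homogeneous

# Illustrative 50 keV case: ~2.6 Gy in the inhomogeneous phantom against
# an assumed ~0.58 Gy in the homogeneous one gives an ICF of ~4.5.
icf_50_keV = inhomogeneity_correction_factor(2.6, 0.58)
```

Converting a measured homogeneous-phantom dose to the inhomogeneous case is then a single multiplication by the ICF at the matching energy and field size.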
Effects of model approximations for electron, hole, and photon transport in swift heavy ion tracks
NASA Astrophysics Data System (ADS)
Rymzhanov, R. A.; Medvedev, N. A.; Volkov, A. E.
2016-12-01
The event-by-event Monte Carlo code TREKIS was recently developed to describe excitation of the electron subsystem of solids in the nanometric vicinity of the trajectory of a nonrelativistic swift heavy ion (SHI) decelerated in the electronic stopping regime. The complex dielectric function (CDF) formalism was applied in the cross sections used to account for the collective response of the matter to excitation. Using this model we investigate the effects of the basic assumptions on the modeled kinetics of the electronic subsystem, which ultimately determine the parameters of an excited material in an SHI track. In particular, (a) the effects of different momentum dependencies of the CDF on scattering of projectiles on the electron subsystem are investigated. The 'effective one-band' approximation for target electrons produces good agreement of the calculated electron mean free paths with those obtained in experiments on metals. (b) The effects of the collective response of the lattice appear to dominate the randomization of electron motion. We study how sensitive these effects are to the target temperature. We also compare the results of applying different model forms of (quasi-)elastic cross sections in simulations of the ion track kinetics, e.g. those calculated taking into account optical phonons in the CDF form vs. Mott's atomic cross sections. (c) It is demonstrated that the kinetics of valence holes significantly affects the redistribution of the excess electronic energy in the vicinity of an SHI trajectory as well as its conversion into lattice excitation in dielectrics and semiconductors. (d) It is also shown that the induced transport of photons originating from the radiative decay of core holes brings the excess energy faster and farther away from the track core; however, the amount of this energy is relatively small.
NASA Astrophysics Data System (ADS)
van der Kaap, N. J.; Koster, L. J. A.
2016-02-01
A parallel, lattice-based kinetic Monte Carlo simulation is developed that runs on a GPGPU board and includes Coulomb-like particle-particle interactions. The performance of this computationally expensive problem is improved by modifying the interaction potential due to nearby particle moves, instead of fully recalculating it. This modification is achieved by adding dipole correction terms that represent the particle move. Exact evaluation of these terms is guaranteed by representing all interactions as 32-bit floating-point numbers, of which only the integers between -2{sup 22} and 2{sup 22} are used. We validate our method by modelling the charge transport in disordered organic semiconductors, including Coulomb interactions between charges. Performance is mainly governed by the particle density in the simulation volume, and improves for increasing densities. Our method allows calculations on large volumes including particle-particle interactions, which is important in the field of organic semiconductors.
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K; Brown, Forrest B; Forget, Benoit
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
NASA Astrophysics Data System (ADS)
Shukri, Seyfan Kelil
2017-01-01
We have performed kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is considered to be Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with charge carrier density for both DOSs. In contrast, the mobility of the charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a significant effect on the charge transport properties as a function of density, as is clearly seen. On the other hand, for the same distribution width and at low carrier density, the change in conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.
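The basic ingredients of such a lattice KMC, a Miller-Abrahams hop rate between localized sites with Gaussian-distributed energies and one BKL-style event selection step, can be sketched as below. The attempt frequency, inverse localization length and disorder strength are illustrative assumptions, not values from the paper:

```python
import numpy as np

KT = 0.025          # thermal energy at room temperature, eV
NU0 = 1e12          # attempt frequency, 1/s (assumed)
ALPHA = 5.0         # inverse localization length, 1/nm (assumed)

def miller_abrahams(dE, dist):
    """Hop rate between two localized sites (Miller-Abrahams form):
    uphill hops are Boltzmann suppressed, downhill hops are not."""
    boltz = np.exp(-dE / KT) if dE > 0 else 1.0
    return NU0 * np.exp(-2.0 * ALPHA * dist) * boltz

def kmc_step(rates, rng):
    """One BKL kinetic Monte Carlo step: pick an event with probability
    proportional to its rate, and draw the exponential waiting time."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = -np.log(rng.random()) / total
    return event, dt

rng = np.random.default_rng(0)
sigma = 0.1                                   # Gaussian DOS width, eV
energies = rng.normal(0.0, sigma, size=8)     # site energies
# rates from site 0 to its neighbours, taken to be 1 nm apart
rates = np.array([miller_abrahams(e - energies[0], 1.0)
                  for e in energies[1:]])
event, dt = kmc_step(rates, rng)
```

A full simulation repeats `kmc_step` over all carriers with Pauli exclusion; mobility follows from the mean displacement per unit field and accumulated time.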
McKinley, M S; Brooks III, E D; Szoke, A
2002-03-20
We compare the Implicit Monte Carlo (IMC) technique to the Symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for examples involving the evolution of a collisionally pumped trapping problem to steady-state, the surface heating of cold media by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
McKinley, M S; Brooks III, E D; Szoke, A
2002-12-03
We compare the Implicit Monte Carlo (IMC) technique to the Symbolic IMC (SIMC) technique, with and without weight vectors in frequency space, for time-dependent line transport in the presence of collisional pumping. We examine the efficiency and accuracy of the IMC and SIMC methods for test problems involving the evolution of a collisionally pumped trapping problem to its steady-state, the surface heating of a cold medium by a beam, and the diffusion of energy from a localized region that is collisionally pumped. The importance of spatial biasing and teleportation for problems involving high opacity is demonstrated. Our numerical solution, along with its associated teleportation error, is checked against theoretical calculations for the last example.
Monte Carlo Simulations of Charge Transport in 2D Organic Photovoltaics.
Gagorik, Adam G; Mohin, Jacob W; Kowalewski, Tomasz; Hutchison, Geoffrey R
2013-01-03
The effect of morphology on charge transport in organic photovoltaics is assessed using Monte Carlo simulations. In isotropic two-phase morphologies, increasing the domain size from 6.3 to 18.3 nm improves the fill factor by 11.6%, a result of decreased tortuosity and relaxation of Coulombic barriers. Additionally, when small aggregates of electron acceptors are interdispersed into the electron donor phase, charged defects form in the system, reducing fill factors by 23.3% on average compared with systems without aggregates. In contrast, systems with idealized connectivity show a 3.31% decrease in fill factor when the domain size is increased from 4 to 64 nm. We attribute this to a decreased rate of exciton separation at donor-acceptor interfaces. Finally, we notice that the presence of Coulomb interactions increases device performance as devices become smaller. The results suggest that for commonly found isotropic morphologies the Coulomb interactions between charge carriers dominate exciton separation effects.
MCNPX Monte Carlo simulations of particle transport in SiC semiconductor detectors of fast neutrons
NASA Astrophysics Data System (ADS)
Sedlačková, K.; Zat'ko, B.; Šagátová, A.; Pavlovič, M.; Nečas, V.; Stacho, M.
2014-05-01
The aim of this paper was to investigate the particle transport properties of a fast neutron detector based on silicon carbide. The MCNPX (Monte Carlo N-Particle eXtended) code was used in our study because it allows seamless particle transport: not only can the interacting neutrons be inspected, but secondary particles can also be banked for subsequent transport. Modelling of the fast-neutron response of a SiC detector was carried out for fast neutrons produced by a 239Pu-Be source with a mean energy of about 4.3 MeV. Using the MCNPX code, the following quantities have been calculated: secondary particle flux densities, reaction rates of elastic/inelastic scattering and other nuclear reactions, distribution of residual ions, deposited energy and energy distribution of pulses. The reaction rates calculated for the different types of reactions and the resulting energy deposition values showed that the incident neutrons transfer part of their energy predominantly via elastic scattering on silicon and carbon atoms. Other fast-neutron induced reactions include inelastic scattering and nuclear reactions followed by production of α-particles and protons. Silicon and carbon recoil atoms, α-particles and protons are charged particles which contribute to the detector response. It was demonstrated that although the bare SiC material can register fast neutrons directly, its detection efficiency can be increased if it is covered by an appropriate conversion layer. Comparison of the simulation results with experimental data was successfully accomplished.
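The dominance of elastic scattering can be made concrete with the standard kinematic limit for the energy a neutron can transfer to a recoil nucleus, E_max = 4A/(1+A)^2 · E_n. A short sketch (the mean source energy is taken from the abstract; the formula is textbook two-body kinematics, not code from the paper):

```python
def max_recoil_energy(e_n, a):
    """Maximum energy a neutron of energy e_n can transfer to a nucleus of
    mass number a in a single elastic (head-on) collision."""
    return 4.0 * a / (1.0 + a) ** 2 * e_n

E_N = 4.3                            # mean 239Pu-Be neutron energy, MeV
e_c = max_recoil_energy(E_N, 12)     # carbon recoil, roughly 1.2 MeV
e_si = max_recoil_energy(E_N, 28)    # silicon recoil, roughly 0.6 MeV
# lighter carbon nuclei receive a larger fraction of the neutron energy
assert e_c > e_si
```

This is why carbon and silicon recoils carry most of the signal in a bare SiC detector, and why a hydrogen-rich conversion layer (A = 1, full energy transfer possible) increases the detection efficiency.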
Monte Carlo simulation of non-conservative positron transport in pure argon
NASA Astrophysics Data System (ADS)
Šuvakov, M.; Petrović, Z. Lj; Marler, J. P.; Buckman, S. J.; Robson, R. E.; Malović, G.
2008-05-01
The main aim of this paper is to apply modern phenomenology and accurate Monte Carlo simulation techniques to obtain the same level of understanding of positron transport as has been achieved for electrons. To this end, a reasonably complete set of cross sections for low energy positron scattering in argon has been used to calculate transport coefficients of low energy positrons in pure argon gas subject to an electrostatic field. We have analyzed the main features of these coefficients and have compared the calculated values with those for electrons in the same gas. The particular focus is on the influence of the non-conservative nature of positronium formation. This effect is substantial, generally speaking much larger than any comparable effects in electron transport due to attachment and/or ionization. As a result several new phenomena have been observed, such as negative differential conductivity (NDC) in the bulk drift velocity, but with no indication of any NDC for the flux drift velocity. In addition, there is a drastic effect on the bulk longitudinal diffusion coefficient for positrons, which is reduced to almost zero, in contrast to the other components of the diffusion tensor, which have normal values. It is found that the best way of explaining these kinetic phenomena is by sampling real space distributions which reveal drastic modification of the usual Gaussian profile due to pronounced spatial differentiation of the positrons by energy.
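The distinction between flux and bulk transport coefficients can be illustrated with a deliberately crude toy model: when a non-conservative process (here a stand-in for positronium formation) preferentially removes particles at the leading, higher-energy edge of the swarm, the centre-of-mass (bulk) drift velocity falls below the flux drift velocity. All parameters below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, v_flux, dt, steps = 20000, 1.0, 0.01, 500
x = np.zeros(n)
alive = np.ones(n, dtype=bool)
for _ in range(steps):
    # advect surviving particles: mean (flux) velocity plus diffusion noise
    x[alive] += (v_flux + rng.normal(0.0, 5.0, alive.sum())) * dt
    # toy non-conservative loss: particles on the leading edge of the swarm
    # (higher average energy) are removed preferentially
    lead = x > np.median(x[alive])
    kill = alive & lead & (rng.random(n) < 0.002)
    alive &= ~kill
bulk_v = x[alive].mean() / (steps * dt)   # centre-of-mass (bulk) velocity
print(f"flux drift velocity {v_flux:.2f}, bulk drift velocity {bulk_v:.2f}")
```

Because the loss continually trims the front of the spatial profile, the surviving ensemble's mean position advances more slowly than individual particles do, reproducing in caricature the depressed bulk coefficients reported in the abstract.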
Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald
2011-06-01
MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 comprises the best available computational physics models for the transport of radiation through matter. In addition to the basic radiation transport physics contained in the Geant4 core, MRED can track energy loss in tetrahedral geometric objects, includes a cross-section biasing and track weighting technique for variance reduction, and provides additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process, and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I.; Carter, S. A.; Zimanyi, G. T.
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values to that of the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-07
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 microm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 microm thick CVD-diamond layer with 0.2 microm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) (60)Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. From a dosimetric point of view, graphite would be an almost perfect
Wang, Lilie L. W.; Beddar, Sam
2011-03-15
Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10 × 10 down to 0.5 × 0.5 cm² for both perpendicular and parallel orientations of the detectors to the incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5 × 0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10 × 10 to 0.5 × 0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied by, at most, 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by only 1%-2% irrespective of field size and orientation of the detector if the length of the sensitive region is no more than 2 mm and the optical fiber stems are prevented from pointing directly at the incident source.
Shin, Younghoon; Kwon, Hyuk-Sang
2016-03-21
We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulation, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation for the close proximity between an implanted optical fibre source and a neural target area typical of optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous domain near the light source. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or to determining the amount of light required for optogenetic manipulation of a specific neural target area.
NASA Astrophysics Data System (ADS)
Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa
2016-01-01
Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data, the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays, searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy was observed between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and the manufactured product, as they may differ substantially. This caveat applies to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging.
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Jiang, Runqing
2012-06-01
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Dose calculations were performed with Monte Carlo simulations (EGSnrc-based codes) for the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and with the pencil-beam algorithm (Pinnacle3 treatment planning system) for the MeV electron beams. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm, respectively. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the dose to bone of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. Dose in the second bone layer interfacing with air was found to be
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better
Perturbative and iterative methods for photon transport in one-dimensional waveguides
NASA Astrophysics Data System (ADS)
Obi, Kenechukwu C.; Shen, Jung-Tsung
2015-05-01
The problem of photon transport in one-dimensional waveguides has recently attracted great attention. We consider the case of single photons scattering off a Λ-type three-level quantum emitter, and discuss perturbative treatments of the scattering processes in terms of the Born approximation for the Lippmann-Schwinger formalism. We show that the iterative Born series of the scattering amplitudes converges to the exact results obtained by other approaches. The generalization of our work provides a foundational basis for efficient computational schemes for photon scattering problems in one-dimensional waveguides.
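The convergence of an iterative Born series toward an exact scattering amplitude can be demonstrated on the simplest solvable case, a 1D delta scatterer, for which the series for the reflection amplitude is geometric in the dimensionless coupling. This is a generic illustration of the series-convergence idea, not the Λ-type three-level calculation of the paper:

```python
import numpy as np

def born_series_reflection(beta, order):
    """Partial sum of the Born series for the reflection amplitude of a
    1D delta scatterer; beta is the dimensionless coupling strength."""
    return -1j * beta * sum((-1j * beta) ** n for n in range(order + 1))

def exact_reflection(beta):
    """Closed-form reflection amplitude (sum of the full geometric series)."""
    return -1j * beta / (1.0 + 1j * beta)

beta = 0.5   # weak coupling: the series converges geometrically
errors = [abs(born_series_reflection(beta, k) - exact_reflection(beta))
          for k in range(8)]
# each additional Born iteration shrinks the error by a factor of beta
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```

For beta < 1 the partial sums converge to the exact amplitude, mirroring the abstract's statement that the iterative Born series reproduces results obtained by other (exact) approaches; for beta > 1 the same series diverges, which is the usual caveat of perturbative treatments.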
Single photon transport in two waveguides chirally coupled by a quantum emitter.
Cheng, Mu-Tian; Ma, Xiao-San; Zhang, Jia-Yan; Wang, Bing
2016-08-22
We investigate single photon transport in two waveguides coupled to a two-level quantum emitter (QE). With the deduced analytical scattering amplitudes, we show that under the condition of chiral coupling between the QE and the photons in the two waveguides, the QE can act as an ideal quantum router, redirecting a single photon incident from one waveguide into the other with 100% probability in the ideal case. The influence of cross-coupling between the two waveguides and of dissipation on the routing is also shown.
NASA Astrophysics Data System (ADS)
Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu
2015-09-01
Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until the recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: the impossibility of reflection-free light propagation along a sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning new revolutionary concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) gyromagnetic inclusion breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of the full topological bandgap in a bianisotropic structure, and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.
Fractional transport and photonic sub-diffusion in aperiodic dielectric metamaterials
NASA Astrophysics Data System (ADS)
Dal Negro, Luca; Wang, Yu; Inampudi, Sandeep
Using rigorous transfer matrix theory and full-vector Finite Difference Time Domain (FDTD) simulations, in combination with Wavelet Transform Modulus Maxima analysis of multifractal spectra, we demonstrate all-dielectric aperiodic metamaterial structures that exhibit sub-diffusive photon transport properties widely tunable across the near-infrared spectral range. The proposed approach leverages the unprecedented spectral scalability offered by aperiodic photonic systems and demonstrates the possibility of achieving logarithmic Sinai sub-diffusion of photons for the first time. In particular, we will show that the control of multifractal energy spectra and critical modes in aperiodic metamaterials with nanoscale dielectric components enables tuning of anomalous optical transport from sub- to super-diffusive dynamics, in close analogy with electron dynamics in quasi-periodic potentials. Fractional diffusion equation models will be introduced for the efficient modeling of photon sub-diffusive processes in metamaterials, and applications to diffraction-free propagation in aperiodic media will be provided. The ability to tailor photon transport phenomena in metamaterials with properties originating from aperiodic geometrical correlations can lead to novel functionalities and active devices that rely on anomalous photon sub-diffusion to control beam collimation and non-resonantly enhance light-matter interaction across multiple spectral bands.
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
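The commissioning step, adjusting phase-space-let weights so that the weighted sum of pre-computed doses matches measurement, amounts to a regularized least-squares fit. A toy 1D sketch with invented Gaussian "PSL doses" and a first-difference smoothness penalty (the paper uses an augmented Lagrangian solver with symmetry constraints; plain `lstsq` is used here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_psl = 200, 10
# pre-computed dose in water for each phase-space-let (toy Gaussian kernels)
depth = np.linspace(0.0, 1.0, n_vox)
centers = np.linspace(0.0, 1.0, n_psl)
D = np.exp(-((depth[:, None] - centers[None, :]) ** 2) / 0.02)
w_true = 1.0 + 0.3 * np.sin(4.0 * centers)       # "machine-specific" weights
measured = D @ w_true + rng.normal(0.0, 1e-3, n_vox)  # noisy measurement

# smoothness regularization: penalize differences of neighbouring weights
lam = 1e-2
L = np.diff(np.eye(n_psl), axis=0)               # first-difference operator
A = np.vstack([D, lam * L])
b = np.concatenate([measured, np.zeros(n_psl - 1)])
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)

# the commissioned beam model reproduces the measured dose to noise level
assert np.max(np.abs(D @ w_fit - measured)) < 1e-2
```

Once the weights are fitted, any field's dose is evaluated as the weighted sum of PSL doses, which is what makes the commissioned model cheap to use at calculation time.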
Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-15
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve the predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties via different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
NASA Astrophysics Data System (ADS)
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 over all energies for the simple test cases. For depth dose curves in asymmetric beams, gamma results similar to those for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement with measured values, with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV 10 × 10 cm² field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
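The 1D gamma evaluation used above can be sketched directly: for each reference point, gamma is the minimum combined dose-difference / distance-to-agreement metric over the evaluated curve. The 2%/2 mm criteria follow the abstract, while the depth-dose curves below are invented exponentials for illustration:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D gamma index. dd is the dose-difference criterion as a fraction of
    the maximum reference dose; dta is the distance criterion in mm.
    gamma <= 1 at a point means that point passes the combined test."""
    norm = dd * ref_dose.max()
    gam = np.empty(len(ref_pos))
    for i, (p, d) in enumerate(zip(ref_pos, ref_dose)):
        g2 = ((eval_pos - p) / dta) ** 2 + ((eval_dose - d) / norm) ** 2
        gam[i] = np.sqrt(g2.min())
    return gam

# toy depth-dose curves: "measured" vs "calculated" with a 1% dose offset
z = np.linspace(0.0, 100.0, 501)          # depth, mm
measured = np.exp(-z / 60.0)
calculated = 1.01 * np.exp(-z / 60.0)
g = gamma_1d(z, measured, z, calculated)
print(f"mean gamma {g.mean():.2f}, pass rate {np.mean(g <= 1.0):.0%}")
```

A small uniform dose offset passes easily here because gamma may trade dose difference against a sub-millimetre spatial shift, which is exactly the tolerance the combined criterion is designed to express.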
Vinke, Ruud; Olcott, Peter D.; Cates, Joshua W.; Levin, Craig S.
2014-01-01
In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. With this method, the timing resolution bound can be calculated directly from Monte Carlo-generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed-form solutions of arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 × 3 × 20 mm³ LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single-photoelectron pulse rise time. PMID:25255807
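The arrival-time approach can be illustrated with a toy Monte Carlo: generate per-event photon detection times and examine the spread of a simple order-statistic estimator (the first detected photon). This is only a schematic stand-in for the bound calculation in the paper; all parameter values and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def event_first_photon_times(n_events=2000, n_photons=5000, tau_decay=40.0,
                             sigma_transit=0.1, det_eff=0.2):
    """Earliest detected photon time per scintillation event (times in ns).
    Emission: exponential decay; transport: Gaussian transit-time spread."""
    times = []
    for _ in range(n_events):
        n_det = rng.binomial(n_photons, det_eff)          # detected photon count
        t = (rng.exponential(tau_decay, n_det)
             + rng.normal(0.0, sigma_transit, n_det))     # emission + transit
        times.append(t.min())                             # first-photon estimator
    return np.array(times)

t_first = event_first_photon_times()
fwhm = 2.355 * t_first.std()   # Gaussian-equivalent FWHM of this estimator
```

The bound in the paper is tighter than any single order statistic, since it uses the full arrival-time distribution rather than just the first photon.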
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories into batches for tally purposes only within a single processor---in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with
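The history-batching idea can be sketched independently of any transport code: accumulate per-history scores locally, then form the tally mean and its uncertainty from batch means, with no communication until the end. The function and the exponential toy scores below are my own illustration, not OpenMC's API.

```python
import numpy as np

rng = np.random.default_rng(42)

def batched_tally(scores, n_batches):
    """Estimate a tally mean and its standard error from batch means.
    Mimics grouping particle histories into batches for tally statistics,
    avoiding per-history variance accumulation (and any network reduction)."""
    batches = np.array_split(np.asarray(scores), n_batches)
    means = np.array([b.mean() for b in batches])
    mean = means.mean()
    sem = means.std(ddof=1) / np.sqrt(n_batches)   # batch-to-batch spread
    return mean, sem

scores = rng.exponential(1.0, 100_000)             # toy per-history tally scores
mean, sem = batched_tally(scores, n_batches=100)
```

Because only batch sums are kept, the memory and communication footprint is set by the number of batches, not the number of histories.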
Enhancing coherent transport in a photonic network using controllable decoherence
Biggerstaff, Devon N.; Heilmann, René; Zecevik, Aidan A.; Gräfe, Markus; Broome, Matthew A.; Fedrizzi, Alessandro; Nolte, Stefan; Szameit, Alexander; White, Andrew G.; Kassal, Ivan
2016-01-01
Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of coherent transport can be enhanced through dynamic interaction between the system and a noisy environment. We report an experimental simulation of environment-assisted coherent transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable-strength decoherence is simulated by broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence. PMID:27080915
MONTE CARLO NEUTRINO TRANSPORT THROUGH REMNANT DISKS FROM NEUTRON STAR MERGERS
Richers, Sherwood; Ott, Christian D.; Kasen, Daniel; Fernández, Rodrigo; O’Connor, Evan
2015-11-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole (BH) and a central hypermassive neutron star (HMNS). Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10⁴⁶ erg is deposited within 45° of the symmetry axis over 300 ms when a central BH is present. Similarly, 1.9 × 10⁴⁸ erg is deposited over 3 s when an HMNS sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.
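The core of any such MC transport code is the loop that samples an exponential optical depth to the next interaction and decides between scattering and absorption. A grey (frequency-independent), one-dimensional toy version is sketched below; it is far simpler than Sedonu, and every detail (isotropic scattering, a single albedo, slab geometry) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmit_fraction(tau_slab, albedo, n_packets=50_000):
    """MC packets injected at the base of a 1D slab of optical depth tau_slab.
    Path lengths are sampled ~ Exp(1) in optical depth; at each interaction the
    packet scatters isotropically with probability `albedo`, else is destroyed."""
    escaped = 0
    for _ in range(n_packets):
        tau, mu = 0.0, 1.0                    # start at inner edge, moving outward
        while True:
            tau += mu * rng.exponential(1.0)  # free flight to next interaction
            if tau >= tau_slab:
                escaped += 1                  # left through the outer surface
                break
            if tau < 0.0 or rng.random() >= albedo:
                break                         # re-entered the base, or absorbed
            mu = rng.uniform(-1.0, 1.0)       # isotropic scattering direction
    return escaped / n_packets

T = transmit_fraction(tau_slab=1.0, albedo=0.0)   # pure absorption: ~ exp(-1)
```

With the albedo set to zero the loop reduces to a single free flight, so the transmitted fraction should approach exp(-τ), which is a convenient sanity check for this kind of sampler.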
Borg, J; Kawrakow, I; Rogers, D W; Seuntjens, J P
2000-08-01
To develop a primary standard for 192Ir sources, the basic science on which this standard is based, i.e., Spencer-Attix cavity theory, must be established. In the present study Monte Carlo techniques are used to investigate the accuracy of this cavity theory for photons in the energy range from 20 to 1300 keV, since it is usually not applied at energies below that of 137Cs. Ma and Nahum [Phys. Med. Biol. 36, 413-428 (1991)] found that in low-energy photon beams the contribution from electrons caused by photons interacting in the cavity is substantial. For the average energy of the 192Ir spectrum they found a departure from Bragg-Gray conditions of up to 3% caused by photon interactions in the cavity. When Monte Carlo is used to calculate the response of a graphite ion chamber to an encapsulated 192Ir source it is found that it differs by less than 0.3% from the value predicted by Spencer-Attix cavity theory. Based on these Monte Carlo calculations, for cavities in graphite it is concluded that the Spencer-Attix cavity theory with delta = 10 keV is applicable within 0.5% for photon energies at 300 keV or above despite the breakdown of the assumption that there is no interaction of photons within the cavity. This means that it is possible to use a graphite ion chamber and Spencer-Attix cavity theory to calibrate an 192Ir source. It is also found that the use of delta related to the mean chord length instead of delta = 10 keV improves the agreement with Spencer-Attix cavity theory at 60Co from 0.2% to within 0.1% of unity. This is at the level of accuracy with which the Monte Carlo code EGSnrc calculates ion chamber responses. In addition, it is shown that the effects of other materials, e.g., insulators and holders, have a substantial effect on the ion chamber response and should be included in the correction factors for a primary standard of air kerma.
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.
PADOVANI, ENRICO
2012-04-15
Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emission and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which currently is distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
Suppression of population transport and control of exciton distributions by entangled photons.
Schlawin, Frank; Dorfman, Konstantin E; Fingerhut, Benjamin P; Mukamel, Shaul
2013-01-01
Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subjected to this uncertainty.
Chow, J; Owrangi, A
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a bone layer of 2 cm thickness at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10 × 10 cm²). Phase-space files of the photon beams based on the Varian TrueBeam linac were generated by the Geant4 and BEAMnrc codes, and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of unflattened photon beams was about 2.1 times larger than that of the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantom became larger compared to that with beam angle equal to 0°. The dose ratio of the unflattened and flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region compared to the flattened beam. Conclusion: Although the unflattened photon beam had different beam output and quality compared to the flattened beam, dose enhancements due to bone scatter were found to be similar. However, we discovered that the depth dose deviation due to the presence of bone was sensitive to the beam obliquity.
Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
NASA Astrophysics Data System (ADS)
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
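The residence-time (BKL/Gillespie) scheme at the heart of such KMC models fits in a few lines for a single walker on a 1D lattice with trap sites: pick an escape rate for the current site, advance the clock by an exponential waiting time, then hop. The rates and trap layout below are arbitrary illustrations, not fitted TRISO parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmc_1d_walk(n_steps, rate_free, rate_trap, trap_sites):
    """Residence-time KMC: one atom hops left/right on a 1D lattice.
    Escape rates are lower on trap sites (binding slows the walker)."""
    x, t = 0, 0.0
    traps = set(trap_sites)
    for _ in range(n_steps):
        r = rate_trap if x in traps else rate_free  # per-direction hop rate here
        rates = [r, r]                              # [left, right]
        total = sum(rates)
        t += rng.exponential(1.0 / total)           # residence time at this site
        x += -1 if rng.random() < rates[0] / total else 1
    return x, t

# traps every 10th site near the origin, 100x slower escape than free sites
x, t = kmc_1d_walk(10_000, rate_free=1.0, rate_trap=0.01,
                   trap_sites=range(-50, 50, 10))
```

An effective diffusion coefficient can then be estimated from the ensemble average of x²/(2t), and deep traps show up directly as a reduction in that estimate.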
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
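Rigorous treatment of round or oblique tissue boundaries hinges on Snell's law and the Fresnel coefficients evaluated against the true surface normal rather than a voxel face. A minimal unpolarized helper (my own sketch, not the authors' code) is:

```python
import numpy as np

def fresnel_unpolarized(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance at a planar boundary between media with
    refractive indices n1 -> n2, given the incidence cosine. Returns 1.0 past
    the total-internal-reflection angle (Snell's law has no real solution)."""
    sin_t_sq = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # Snell's law, squared
    if sin_t_sq >= 1.0:
        return 1.0                                   # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t_sq)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)  # s-polarized
    rp = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)  # p-polarized
    return 0.5 * (rs ** 2 + rp ** 2)

R_normal = fresnel_unpolarized(1.33, 1.50, 1.0)  # e.g. a water/tissue-like pair
```

In a voxelized model the same formula is applied, but with the serrated voxel-face normal, which is exactly where the 10-30% local discrepancies described above can originate.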
Detailed Monte Carlo Simulation of electron transport and electron energy loss spectra.
Attarian Shandiz, M; Salvat, F; Gauvin, R
2016-11-01
A computer program for detailed Monte Carlo simulation of the transport of electrons with kinetic energies in the range between about 0.1 and about 500 keV in bulk materials and in thin solid films is presented. Elastic scattering is described from differential cross sections calculated by the relativistic (Dirac) partial-wave expansion method with different models of the scattering potential. Inelastic interactions are simulated from an optical-data model based on an empirical optical oscillator strength that combines optical functions of the solid with atomic photoelectric data. The generalized oscillator strength is built from the adopted optical oscillator strength by using an extension algorithm derived from Lindhard's dielectric function for a free-electron gas. It is shown that simulated backscattering fractions of electron beams from bulk (semi-infinite) specimens are in good agreement with experimental data for beam energies from 0.1 keV up to about 100 keV. Simulations also yield transmitted and backscattered fractions of electron beams on thin solid films that agree closely with measurements for different film thicknesses and incidence angles. Simulated most probable deflection angles and depth-dose distributions also agree satisfactorily with measurements. Finally, electron energy loss spectra of several elemental solids are simulated and the effects of the beam energy and the foil thickness on the signal-to-background and signal-to-noise ratios are investigated. SCANNING 38:475-491, 2016. © 2015 Wiley Periodicals, Inc.
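Detailed simulations like this sample each elastic deflection from tabulated differential cross sections. For intuition, the classic screened Rutherford distribution, f(μ) ∝ (1 + 2η − μ)⁻², admits a closed-form inverse-CDF sampler; the screening parameter value below is arbitrary, and this analytic model is only a rough stand-in for partial-wave cross sections.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_screened_rutherford(eta, n):
    """Sample mu = cos(theta) from f(mu) ∝ (1 + 2*eta - mu)^-2 on [-1, 1]
    by inverting the CDF analytically (eta: atomic screening parameter)."""
    xi = rng.random(n)
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (xi + eta)

mu = sample_screened_rutherford(eta=0.01, n=200_000)  # strongly forward-peaked
```

Small η gives the strongly forward-peaked angular distributions typical of keV electron elastic scattering; the limits ξ = 0 and ξ = 1 map exactly to μ = -1 and μ = +1.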
Monte Carlo Simulation of Atmospheric Neutron Transport at High Altitudes Using MCNP
1990-08-01
interaction data, (2) discrete reaction neutron interaction data, (3) multigroup neutron interaction data, (4) continuous photon interaction data and (5) multigroup photon interaction data. In neutron-only and coupled neutron/photon problems, one continuous-energy, multigroup or discrete reaction … as histograms rather than as continuous curves. The multigroup tables have been derived from the same sources as the other neutron interaction tables
Chow, J; Grigor, G
2014-08-15
This study investigated the dosimetric impact due to bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations were used to calculate depth doses and photon fluence spectra using the EGSnrc-based code. An inhomogeneous bone phantom containing a thin water layer (1-3 mm) on top of a bone (1 cm), to mimic the treatment sites of forehead, chest wall and kneecap, was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. Results of percentage depth doses showed that the maximum bone dose was about 210-230% larger than the surface dose in the phantoms with different water thicknesses. The surface dose was found to increase from 2.3% to 3.5% when the distance between the phantom surface and bone was increased from 1 to 3 mm. This increase of surface dose on top of a bone was due to the increase of photon fluence intensity, resulting from the bone backscatter in the energy range of 30-120 keV, as the water thickness was increased. This was also supported by the increase of the intensity of the photon energy spectral curves at the phantom and bone surface as the water thickness was increased. It is concluded that if the bone inhomogeneity is not considered during the dose prescription in the sites of forehead, chest wall and kneecap with soft tissue thickness = 1-3 mm, there would be an uncertainty in the dose delivery.
NASA Astrophysics Data System (ADS)
Ishmael Parsai, E.; Pearson, David; Kvale, Thomas
2007-08-01
An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of the points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.
Kinetic Monte Carlo model of charge transport in hematite (α-Fe2O3)
NASA Astrophysics Data System (ADS)
Kerisit, Sebastien; Rosso, Kevin M.
2007-09-01
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parametrized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of
Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)
Kerisit, Sebastien N.; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of
Chow, James C.L.; Owrangi, Amir M.
2012-07-01
Dependences of mucosal dose in the oral or nasal cavity on the beam energy, beam angle, multibeam configuration, and mucosal thickness were studied for small photon fields using Monte Carlo simulations (EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm) with and without the bone and air inhomogeneities were irradiated by the 6- and 18-MV photon beams (field size = 1 × 1 cm²) with gantry angles equal to 0°, 90°, and 180°, and multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1 to 3 mm, when the beam angle is 0°. The variation of mucosal surface dose with its thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (28% for the 6-MV photon beam and 20% for the 18-MV beam) with the mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on its thickness became insignificant when the number of photon beams around the mucosal tissue was increased. The mucosal dose with bone varied with the beam energy, beam angle, multibeam configuration and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider when refining the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized.
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used but only recently published CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied "out-scatter" transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite-medium hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
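The "out-scatter" correction discussed above replaces the total cross section with Σ_tr = Σ_t - μ̄Σ_s, where μ̄ is the mean scattering cosine (2/(3A) for isotropic center-of-mass elastic scattering off mass number A), and sets D = 1/(3Σ_tr). A sketch with made-up hydrogen-like numbers:

```python
def outscatter_transport(sigma_t, sigma_s, mu_bar):
    """'Out-scatter' transport correction: sigma_tr = sigma_t - mu_bar*sigma_s,
    diffusion coefficient D = 1/(3*sigma_tr). Macroscopic units (cm^-1) assumed."""
    sigma_tr = sigma_t - mu_bar * sigma_s
    return sigma_tr, 1.0 / (3.0 * sigma_tr)

# hydrogen-like illustration: A = 1 gives mu_bar = 2/(3*1) = 2/3 (strongly forward)
sigma_tr, D = outscatter_transport(sigma_t=1.0, sigma_s=0.9, mu_bar=2.0 / 3.0)
```

The paper's point is precisely that for strongly anisotropic scatterers like hydrogen this simple correction can be significantly in error, which is what the rigorous method quantifies.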
Photon transport in a one-dimensional nanophotonic waveguide QED system
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Zeng, Xiaodong; Nha, Hyunchul; Zubairy, M. Suhail
2016-06-01
The waveguide quantum electrodynamics (QED) system may have important applications in quantum devices and quantum information technology. In this article we review the methods being proposed to calculate photon transport in a one-dimensional (1D) waveguide coupled to quantum emitters. We first introduce the Bethe ansatz approach and the input-output formalism to calculate the stationary results of single-photon transport. Then we present a dynamical time-dependent theory to calculate the real-time evolution of the waveguide QED system. In the long-time limit, both the stationary theory and the dynamical calculation give the same results. Finally, we also briefly discuss the calculations of multiphoton transport problems.
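For the simplest case treated by these methods, a single two-level emitter with no decay outside the waveguide, the stationary single-photon transmission amplitude has the well-known closed form t(δ) = δ/(δ + iΓ/2), with δ the detuning and Γ the decay rate into the waveguide; the numerical grid below is purely illustrative.

```python
import numpy as np

def transmission(delta, gamma):
    """|t|^2 for a single photon past a lossless two-level emitter coupled
    to a 1D waveguide: t = delta / (delta + i*gamma/2)."""
    t = delta / (delta + 1j * gamma / 2.0)
    return np.abs(t) ** 2

delta = np.linspace(-5.0, 5.0, 101)  # detuning in units of gamma
T = transmission(delta, gamma=1.0)   # dips to 0 exactly on resonance
```

The complete reflection at resonance (T = 0) is the hallmark prediction recovered by both the Bethe ansatz and input-output treatments in the stationary limit.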
Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method
NASA Astrophysics Data System (ADS)
Yamada, Shoichi; Janka, Hans-Thomas; Suzuki, Hideyuki
1999-04-01
We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aimed at calculations of neutrino transport in type II supernovae. A close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post-bounce core models under the assumption of a static background. We have also investigated in detail the dependence of the results on the numbers of radial, angular, and energy grid points and on the discretization of the spatial advection term used in the Boltzmann solver. A general relativistic calculation has been done for one of the models. We find good overall agreement between the two methods. This gives credibility to both methods, which are based on completely different formulations. In particular, the number and energy fluxes and the mean energies of the neutrinos show remarkably good agreement, because these quantities are determined in a region where the angular distribution of the neutrinos is nearly isotropic and they are essentially frozen in later on. On the other hand, because of a relatively small number of angular grid points (which is inevitable due to limitations of the computation time), the Boltzmann solver tends to slightly underestimate the flux factor and the Eddington factor outside the (mean) ``neutrinosphere'' where the angular distribution of the neutrinos becomes highly anisotropic. As a result, the neutrino number (and energy) density is somewhat overestimated in this region. This fact suggests that the Boltzmann solver should be applied to calculations of neutrino heating in the hot-bubble region with some caution, because there might be a tendency to overestimate the energy deposition rate in disadvantageous situations. A comparison shows that this trend is opposite to the results obtained with a multi-group flux-limited diffusion approximation of neutrino transport. Employing three different flux limiters, we find that all of them lead to a significant
Procassini, R J; Beck, B R
2004-12-07
It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem, in which the quality of the RNG (period of the LCG) is varied, provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
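The period-versus-quality question can be made concrete with a toy linear congruential generator. The parameters below are illustrative textbook values, not those of the CNPRNG library:

```python
# Minimal LCG sketch: x_{n+1} = (a*x_n + c) mod m. Parameters are illustrative.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # uniform variate in [0, 1)

def period(a, c, m, seed=1):
    """Brute-force period measurement, feasible only for a small modulus."""
    x0 = (a * seed + c) % m
    x = (a * x0 + c) % m
    n = 1
    while x != x0:
        x = (a * x + c) % m
        n += 1
    return n

# A toy modulus makes the repetition visible: a = 5, c = 3 satisfy the
# Hull-Dobell full-period conditions, so the sequence repeats after exactly
# m = 256 draws -- far too short for any realistic transport simulation.
short = period(a=5, c=3, m=2**8)
```

A production LCG differs only in the size of `m` and the choice of `a` and `c`; the study asks how large that period must be before tally means stop showing bias.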
NASA Astrophysics Data System (ADS)
Wang, Zi-Qing; Wang, Guo-Dong; Shen, Wei-Bo
2010-10-01
Multimotor transport is studied by Monte Carlo simulation with consideration of motor detachment from the filament. Our work shows that, in the case of low load, the velocity of a multi-motor system can decrease or increase with increasing motor number, depending on the single-motor force-velocity curve. The stall force and run length are greatly reduced compared to other models. In particular, at low ATP concentrations, the stall force of multi-motor transport is even smaller than the single motor's stall force.
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying of particle information, and particle destruction. Particles are also traded among processors using MPI calls.
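The kernel pattern the benchmark exercises (create, track, tally, destroy) can be sketched with a stand-in physics problem. This is an illustrative pure-absorber slab, not the MCB source itself:

```python
# Minimal Monte Carlo kernel sketch: create particles, track them through a
# purely absorbing slab, tally transmission, retire the particle. Illustrative.
import math, random

def run_slab(n_particles, sigma_t=1.0, thickness=2.0, seed=7):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):                        # particle creation
        path = -math.log(1.0 - rng.random()) / sigma_t  # sampled free-flight length
        if path >= thickness:                           # tracking through the slab
            transmitted += 1                            # tally the leakage
        # particle destruction: absorbed or leaked, the history is retired
    return transmitted / n_particles

frac = run_slab(20000)
# Analytic transmission through a pure absorber: exp(-sigma_t * thickness)
```

In the real MCB the loop body is richer (scattering, splitting, MPI particle exchange), but the structure of the history loop is the same.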
Ding, George X; Duzenli, Cheryl; Kalach, Nina I
2002-09-07
This study presents neutron dose measurements made with a neutron dosimeter in a water phantom and investigates the hypothesis that neutrons in a high-energy photon beam may be responsible for the reported significant dose discrepancies between Monte Carlo calculations and measurements in the build-up region of large fields. Borated polyethylene slabs were inserted between the accelerator head and the phantom in order to remove neutrons generated in the accelerator head. The thickness of the slab ranged from 2.5 cm to 10 cm. A lead slab of 3 mm thickness was also used in the study. A superheated drop neutron dosimeter was used to measure the depth-dose curve of neutrons in a high-energy photon beam and to verify the effectiveness of the slab in removing these neutrons. Total dose measurements were performed in water using a WELLHOFER WP700 beam scanner with an IC-10 ionization chamber. The Monte Carlo code BEAM was used to simulate an 18 MV photon beam from a Varian Clinac-2100EX accelerator. Both EGS4/DOSXYZ and EGSnrc/DOSRZnrc were used in the dose calculations. Measured neutron dose equivalents as a function of depth per unit total dose in water are presented for 10 × 10 and 40 × 40 cm² fields. The measurements show that a 5-10 cm thick borated polyethylene slab can reduce the neutron dose by a factor of 2 when inserted between the accelerator head and the detector. In all cases the measured neutron dose equivalent was less than 0.5% of the photon dose. To study whether the ion chamber was highly sensitive to the neutron dose, we investigated the disagreement between the Monte Carlo calculated and measured central-axis depth-dose curves in the build-up region when different shielding materials were used. The result indicated that the IC-10 chamber was not highly sensitive to the neutron dose. Therefore, neutrons present in a high-energy photon beam are unlikely to be responsible for the reported discrepancies in the build-up region for large fields.
Monte Carlo simulation of ion transport of the high strain ionomer with conducting powder electrodes
NASA Astrophysics Data System (ADS)
He, Xingxi; Leo, Donald J.
2007-04-01
The transport of charge due to electric stimulus is the primary mechanism of actuation for a class of polymeric active materials known as ionomeric polymer transducers (IPT). At low frequency, the strain response is strongly related to charge accumulation at the electrodes. Experimental results have demonstrated that using conducting powders, such as single-walled carbon nanotubes (SWNT), polyaniline (PANI) powders, high surface area RuO2, or carbon black, in the electrodes increases the mechanical deformation of the IPT by increasing the capacitance of the material. In this paper, a Monte Carlo simulation of a two-dimensional ion hopping model has been built to describe ion transport in the IPT. The conducting powder particles are assumed to be spheres. A step voltage is applied between the electrodes of the IPT, causing thermally activated hopping between multiwell energy structures. The energy barrier height includes three parts: the height due to the external electric potential, the intrinsic height, and the height due to ion interactions. The finite-element software ANSYS is employed to calculate the static electric potential distribution inside the material with the powder sphere in varied locations. The interaction between ions and the electrodes, including powder electrodes, is determined by using the method of images. At each simulation step, the energy of each cation is updated to compute the ion hopping rate, which directly relates to the probability of an ion moving to a neighboring site. The simulation ends when the current drops to zero. Periodic boundary conditions are applied when ions hop in the direction perpendicular to the external electric field: when an ion moves out of the simulation region, its periodic replica enters from the opposite side. In the direction of the external electric field, parallel programming is achieved in C augmented with functions that perform message-passing between processors using Message
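The thermally activated hopping step described above can be illustrated with a deliberately reduced 1-D sketch. The barrier, bias, and temperature values are illustrative, not parameters of the IPT model:

```python
# Toy 1-D thermally activated hopping: Arrhenius rates with the barrier tilted
# by an applied field. All parameter values (eV-like units) are illustrative.
import math, random

def hop_simulation(steps=20000, kT=0.025, barrier=0.30, bias=0.05, seed=3):
    """Return the mean drift (sites per step) of a single cation."""
    rng = random.Random(seed)
    rate_fwd = math.exp(-(barrier - bias / 2) / kT)  # field lowers the forward barrier
    rate_bwd = math.exp(-(barrier + bias / 2) / kT)  # and raises the backward one
    p_fwd = rate_fwd / (rate_fwd + rate_bwd)         # probability of a forward hop
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < p_fwd else -1
    return x / steps

drift = hop_simulation()
```

The full model adds the ion-interaction and image-charge terms to the barrier and works on a 2-D lattice, but each Monte Carlo step has this same rate-then-hop structure.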
Photon-assisted transport in bilayer graphene flakes
NASA Astrophysics Data System (ADS)
Zambrano, D.; Rosales, L.; Latgé, A.; Pacheco, M.; Orellana, P. A.
2017-01-01
The electronic conductance of graphene-based bilayer flake systems reveals different quantum interference effects, such as Fabry-Pérot resonances and sharp Fano antiresonances on account of competing electronic paths through the device. These properties may be exploited to obtain spin-polarized currents when the same nanostructure is deposited above a ferromagnetic insulator. Here, we study how the spin-dependent conductance is affected when a time-dependent gate potential is applied to the bilayer flake. Following a Tien-Gordon formalism, we explore how to modulate the transport properties of such systems via appropriate choices of the ac-field gate parameters. The presence of an oscillating field opens the possibility of tuning the original antiresonances for a large set of field parameters. We show that interference patterns can be partially or fully removed by the time-dependent gate voltage. The results are reflected in the corresponding weighted spin polarization, which can reach maximum values for a given spin component. We found that differential conductance maps as functions of bias and gate potentials show interference patterns for different ac-field parameter configurations. The proposed bilayer graphene flake systems may be used as a frequency detector in the THz range.
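In the Tien-Gordon picture used above, an ac gate of amplitude V_ac at frequency ω redistributes transmission among photon sidebands n with weights J_n(α)², where α = eV_ac/ħω. A self-contained sketch (the Bessel function is evaluated from its integral representation, and α is an arbitrary example value):

```python
# Tien-Gordon sideband weights J_n(alpha)^2; alpha is an illustrative value.
import math

def bessel_jn(n, x, samples=4000):
    # Integral representation J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt,
    # evaluated with the trapezoid rule (stdlib only, no SciPy needed).
    h = math.pi / samples
    total = 0.5 * (1.0 + math.cos(n * math.pi))  # endpoint terms f(0), f(pi)
    for k in range(1, samples):
        t = k * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def sideband_weights(alpha, n_max=8):
    return {n: bessel_jn(n, alpha) ** 2 for n in range(-n_max, n_max + 1)}

w = sideband_weights(alpha=1.5)
# Sum rule: the sideband weights add up to 1 (the ac field only redistributes
# spectral weight among photon-assisted channels, it does not create any).
```

Tuning α moves weight from the elastic channel (n = 0) into the sidebands, which is how the gate parameters shift or wash out the Fano antiresonances.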
Walsh, J. A.; Palmer, T. S.; Urbatsch, T. J.
2013-07-01
A new method for generating discrete scattering cross sections for use in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method are verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
Thermal effects on photon-induced quantum transport in a single quantum dot.
Assunção, M O; de Oliveira, E J R; Villas-Bôas, J M; Souza, F M
2013-04-03
We theoretically investigate laser induced quantum transport in a single quantum dot attached to electrical contacts. Our approach, based on a nonequilibrium Green function technique, allows us to include thermal effects on the photon-induced quantum transport and excitonic dynamics, enabling the study of non-Markovian effects. By solving a set of coupled integrodifferential equations, involving correlation and propagator functions, we obtain the photocurrent and the dot occupation as a function of time. Two distinct sources of decoherence, namely, incoherent tunneling and thermal fluctuations, are observed in the Rabi oscillations. As temperature increases, a thermally activated Pauli blockade results in a suppression of these oscillations. Additionally, the interplay between photon and thermally induced electron populations results in a switch of the current sign as time evolves and its stationary value can be maximized by tuning the laser intensity.
Eigen decomposition solution to the one-dimensional time-dependent photon transport equation.
Handapangoda, Chintha C; Pathirana, Pubudu N; Premaratne, Malin
2011-02-14
The time-dependent one-dimensional photon transport (radiative transfer) equation is widely used to model light propagation through turbid media with a slab geometry in a vast number of disciplines. Several numerical and semi-analytical techniques are available to solve this equation accurately. In this work we propose a novel, efficient solution technique based on eigen decomposition of the vectorized version of the photon transport equation. Using clever transformations, the four-variable integro-differential equation is reduced to a set of first-order ordinary differential equations using a combination of a spectral method and the discrete ordinates method. An eigen decomposition approach is then utilized to obtain the closed-form solution of this reduced set of ordinary differential equations.
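The final step above, solving a reduced first-order linear ODE system in closed form via eigen decomposition, can be illustrated on a 2×2 toy system y' = A y, where y(t) = V exp(Λt) V⁻¹ y₀. No relation to the paper's actual reduced transport operator is implied:

```python
# Closed-form solution of y' = A y for a symmetric 2x2 A via eigen decomposition.
# The matrix and initial condition are illustrative; assumes b != 0.
import math

def eig_2x2_sym(a, b, c):
    """Eigenpairs of [[a, b], [b, c]] (symmetric, so the spectrum is real)."""
    disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
    lam1, lam2 = (a + c) / 2 + disc, (a + c) / 2 - disc
    # (A - lam*I) v = 0 gives the (unnormalized) eigenvector (b, lam - a).
    return (lam1, lam2), ((b, lam1 - a), (b, lam2 - a))

def solve_ivp_2x2(a, b, c, y0, t):
    (l1, l2), ((v11, v21), (v12, v22)) = eig_2x2_sym(a, b, c)
    det = v11 * v22 - v12 * v21
    c1 = (v22 * y0[0] - v12 * y0[1]) / det    # coefficients in the eigenbasis,
    c2 = (-v21 * y0[0] + v11 * y0[1]) / det   # i.e. c = V^{-1} y0
    e1, e2 = c1 * math.exp(l1 * t), c2 * math.exp(l2 * t)
    return (v11 * e1 + v12 * e2, v21 * e1 + v22 * e2)

y = solve_ivp_2x2(a=-2.0, b=1.0, c=-2.0, y0=(1.0, 0.0), t=1.0)
```

For this example the eigenvalues are −1 and −3, so y(t) = ½e^(−t)(1,1) + ½e^(−3t)(1,−1); the paper's method applies the same idea to the much larger system produced by the spectral/discrete-ordinates reduction.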
Monte Carlo next-event estimates from thermal collisions
Hendricks, J.S.; Prael, R.E.
1990-01-01
A new approximate method has been developed by Richard E. Prael to allow S({alpha},{beta}) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP.
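A next-event (point-detector) contribution from a discrete probability distribution, the generic situation the method addresses, can be sketched as follows. The bin structure and cross sections are illustrative, not MCNP's S(α,β) treatment:

```python
# Next-event estimator sketch: a collision at distance R from a point detector
# contributes p(mu_d) * exp(-tau) / (2*pi*R^2), where p(mu_d) is the PDF density
# of scattering toward the detector. Bins and cross sections are illustrative.
import math

def next_event_contribution(bin_probs, bin_edges, mu_d, sigma_t, R):
    """bin_probs[i] is the probability of cosine bin [bin_edges[i], bin_edges[i+1])."""
    for p, lo, hi in zip(bin_probs, bin_edges, bin_edges[1:]):
        if lo <= mu_d < hi:
            density = p / (hi - lo)      # PDF density per unit cosine within the bin
            tau = sigma_t * R            # optical depth along the flight to the detector
            return density * math.exp(-tau) / (2.0 * math.pi * R ** 2)
    return 0.0

probs = [0.1, 0.2, 0.3, 0.4]            # discrete scattering-cosine distribution
edges = [-1.0, -0.5, 0.0, 0.5, 1.0]     # equal-width cosine bins
flux = next_event_contribution(probs, edges, mu_d=0.7, sigma_t=0.5, R=2.0)
```

The difficulty the note resolves is obtaining a sensible density p(μ_d) when the underlying thermal scattering law is tabulated as discrete lines rather than a smooth distribution.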
Griesheimer, D. P.; Stedry, M. H.
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Müller, Florian; Jenny, Patrick; Meyer, Daniel W.
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has been proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
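The telescoping-sum structure of MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], can be sketched with a toy Euler discretization of geometric Brownian motion. The levels share Brownian increments so each correction has small variance; all parameters are illustrative, not the reservoir problem above:

```python
# Toy MLMC estimator: level l uses an Euler scheme with 2**l time steps; the
# coarse path at each level reuses the fine path's Brownian increments.
import math, random

def euler_pair(rng, mu, sigma, T, level):
    """Fine-path payoff S_T and (for level > 0) the coupled coarse payoff."""
    nf = 2 ** level
    dt = T / nf
    dws = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
    sf = 1.0
    for dw in dws:
        sf += mu * sf * dt + sigma * sf * dw                      # fine path
    if level == 0:
        return sf, None
    sc, dtc = 1.0, 2 * dt
    for i in range(0, nf, 2):
        sc += mu * sc * dtc + sigma * sc * (dws[i] + dws[i + 1])  # coarse path
    return sf, sc

def mlmc_estimate(levels=4, n0=2000, mu=0.05, sigma=0.2, T=1.0, seed=11):
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        n = max(n0 // 2 ** l, 100)  # fewer samples on the expensive fine levels
        acc = 0.0
        for _ in range(n):
            fine, coarse = euler_pair(rng, mu, sigma, T, l)
            acc += fine - (coarse if coarse is not None else 0.0)
        est += acc / n               # telescoping sum of level corrections
    return est

estimate = mlmc_estimate()           # analytic E[S_T] = exp(mu*T) ~ 1.0513
```

Because the level corrections shrink, most samples are spent on the cheap coarse level; the study above applies the same idea with streamline solves playing the role of the level solvers.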
NASA Astrophysics Data System (ADS)
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP for a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% in the control rod reactivity, and 1% in the sodium void reactivity.
Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
NASA Astrophysics Data System (ADS)
Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson
2012-02-01
Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate the real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is held constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending on the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).
NASA Astrophysics Data System (ADS)
Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin
2015-10-01
n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge.
NASA Astrophysics Data System (ADS)
Jadach, S.; Ward, B. F. L.
1996-07-01
We present the theoretical basis and sample Monte Carlo data for the YFS exponentiated O(α) calculation of polarized Møller scattering at c.m.s. energies large compared to 2m_e. Both longitudinal and transverse polarizations are discussed. Possible applications to Møller polarimetry at the SLD are thus illustrated.
Use of single scatter electron Monte Carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Liu, Baoshun; Li, Ziqiang; Zhao, Xiujian
2015-02-21
In this research, a Monte-Carlo Continuity Random Walking (MC-RW) model was used to study the relation between electron transport and photocatalysis in nano-crystalline (nc) clusters. The effects of defect energy disorder, spatial disorder of the material structure, electron density, and interfacial transfer/recombination on electron transport and photocatalysis were studied. Photocatalytic activity is defined statistically as 1/τ, with τ being the average electron lifetime. Based on the MC-RW simulation, a clear physical and chemical "picture" is given for the photocatalytic kinetic analysis of nc-clusters. It is shown that reducing the defect energy disorder and the spatial structural disorder of the material, for example by decreasing the defect trap number, increasing the crystallinity, increasing the particle size, and increasing the inter-particle connection, can enhance photocatalytic activity through increased electron transport ability. Increasing the electron density raises the electron Fermi level, which decreases the activation energy for electron de-trapping from traps to extended states and correspondingly increases electron transport ability and photocatalytic activity. Reducing recombination of electrons and holes increases electron transport through the increase of electron density and thus increases photocatalytic activity. In addition to electron transport, increasing the probability for electrons to undergo photocatalysis can increase photocatalytic activity through an increased interfacial electron transfer speed.
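The statistical definition of activity as 1/τ can be illustrated with a reduced 1-D random-walk sketch in the spirit of (but much simpler than) the MC-RW model. The trap depths and rates are illustrative:

```python
# Toy trap-limited random walk: an electron hops on a 1-D chain and is
# collected (interfacial transfer) at the last site; dwell times follow an
# Arrhenius detrapping rate. Activity is 1/<lifetime>. Values are illustrative.
import math, random

def mean_lifetime(trap_depth, n_sites=20, walkers=400, kT=0.025, seed=5):
    rng = random.Random(seed)
    nu0 = 1.0                                # attempt frequency (arbitrary units)
    rate = nu0 * math.exp(-trap_depth / kT)  # thermally activated detrapping rate
    total = 0.0
    for _ in range(walkers):
        pos, t = 0, 0.0
        while pos < n_sites - 1:
            t += rng.expovariate(rate)       # dwell time before escaping the trap
            # reflecting wall at site 0, unbiased hop elsewhere
            pos += 1 if (pos == 0 or rng.random() < 0.5) else -1
        total += t
    return total / walkers

shallow = mean_lifetime(trap_depth=0.10)
deep = mean_lifetime(trap_depth=0.15)
# Deeper traps slow detrapping, lengthen the mean lifetime, and lower 1/tau.
```

The full MC-RW model adds an energetic distribution of traps, 3-D cluster geometry, and recombination, but the lifetime statistic it reports has this same construction.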
Schaefer, C; Jansen, A P J
2013-02-07
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
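The mass-balance correction described above, the smallest adjustment that restores conservation, reduces for a single constraint to an orthogonal least-squares projection. This toy A → B example is an illustrative special case of the stoichiometry-matrix formulation, not the authors' implementation:

```python
# Least-squares mass-balance correction, single-constraint case: project the
# noisy rate vector onto the subspace where sum(c_i * r_i) = 0. Illustrative.
def mass_balance_correct(rates, constraint):
    """Minimal-L2 change to `rates` so that the constraint dot-product vanishes."""
    dot = sum(c * r for c, r in zip(constraint, rates))
    norm2 = sum(c * c for c in constraint)
    return [r - c * dot / norm2 for c, r in zip(constraint, rates)]

# Noisy KMC estimates for A -> B: A consumed at 1.02, B produced at 0.97.
# Conservation demands these match, encoded as constraint [1, -1].
corrected = mass_balance_correct([1.02, 0.97], [1.0, -1.0])
```

With several species and reactions the single constraint vector becomes the stoichiometry matrix and the projection becomes the non-singular linear system described in the abstract, but the principle of a minimal conservative adjustment is the same.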
NASA Astrophysics Data System (ADS)
Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie
2012-09-01
A thermal-to-fusion neutron convertor is being studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate when applied to such multi-step reaction problems. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate this coupled problem, from neutron absorption through charged particle ionization to secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to distinctly improve the calculation speed by making deuteron/triton-induced secondary products play a major role. Nuclear data are taken from ENDF or TENDL, and stopping powers from SRIM, which better describes low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source, which has been studied in depth and involves deuteron transport and secondary neutron generation, is employed. Various parameters, including the fusion neutron angular distribution, the average neutron energy at different emission directions, and differential and integral energy distributions, are calculated with our tool and with a traditional deterministic method as reference. Finally, we present the calculation results for the convertor obtained with RSMC, including the conversion ratio of 1 mm 6LiD under a typical thermal neutron (Maxwell spectrum) incidence and the fusion neutron spectrum, which will be used for our experiment.
Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad
2015-02-24
Depending on the location and depth of the tumor, either electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. On the other hand, photon beams are used for the treatment of deep targets. Both beam types have limitations, for example the depth dependence of the penumbra and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for a 16 MeV electron beam, and the results were validated by benchmarking the simulated percent depth dose (PDD) and profile against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while part of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study demonstrated the advantages of mixing electron and photon beams: the depth dependence of the pure electron penumbra is reduced, especially for small fields, and the strong variation of the PDD curve with irradiation field size is decreased.
Cascaded two-photon spectroscopy of Yb atoms with a transportable effusive atomic beam apparatus
NASA Astrophysics Data System (ADS)
Song, Minsoo; Yoon, Tai Hyun
2013-02-01
We present a transportable effusive atomic beam apparatus for cascaded two-photon spectroscopy of the dipole-forbidden transition (6s2 1S0 ↔ 6s7s 1S0) of Yb atoms. An ohmic-heating effusive oven is designed to have a reservoir volume of 1.6 cm3 and a high degree of atomic beam collimation, with a collimation angle of 30 mrad. The new atomic beam apparatus allows us to detect the spontaneously emitted cascaded two photons from the 6s7s 1S0 state via the intercombination 6s6p 3P1 state with a high signal-to-noise ratio even at a temperature of 340 °C. This is made possible in our apparatus by the enhanced atomic beam flux and superior detection solid angle.
Bahreyni Toossi, Mohammad Taghi; Behmadi, Marziyeh; Ghorbani, Mahdi; Gholamhosseinian, Hamid
2013-09-06
Several investigators have pointed out that electron and neutron contamination from high-energy photon beams is clinically important. The aim of this study is to assess electron and neutron contamination production by various prostheses in a high-energy photon beam of a medical linac. A 15 MV Siemens PRIMUS linac was simulated with the MCNPX Monte Carlo (MC) code, and the resulting percentage depth dose (PDD) and dose profile values were compared with measured data. Electron and neutron contaminations were calculated on the beam's central axis for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti hip prostheses through MC simulations. The dose increase factor (DIF) was calculated as the ratio of the electron (neutron) dose at a point for a 10 × 10 cm² field size in the presence of a prosthesis to that at the same point in its absence. DIF was estimated at different depths in a water phantom. Our MC-calculated PDD and dose profile data are in good agreement with the corresponding measured values. Maximum dose increase factors for electron contamination for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti prostheses were 1.18, 1.16, 1.16, and 1.14, respectively. The corresponding values for neutron contamination were 184.55, 137.33, 40.66, and 43.17, respectively. Titanium-based prostheses are recommended for orthopedic hip joint replacement. When treatment planning for a patient with a hip prosthesis is performed for a high-energy photon beam, an attempt should be made to ensure that the prosthesis is not exposed to primary photons.
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. The main physics of the radiation transport codes HZETRN, UPROP, FLUKA, and GEANT4 is compared for slab geometry with solar particle event (SPE) and galactic cosmic ray (GCR) spectra.
Program EPICP: Electron photon interaction code, photon test module. Version 94.2
Cullen, D.E.
1994-09-01
The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection and lateral leakage from the detector, as well as events and energy deposition as a function of depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed for the same purpose: electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I merely note where we expect results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.
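EPICP itself is not available here; the following is a toy analogue, under assumed parameters, of the kind of one-zone cylindrical tally the abstract describes: photons enter on-axis, free paths are sampled from the total attenuation coefficient, and each history is scored as transmission, reflection, lateral leakage, or absorption.

```python
import math
import random

def one_zone_tally(mu_t=1.0, albedo=0.5, L=2.0, R=1.0, n=20000, seed=1):
    """Toy one-zone cylinder photon tally (all parameters hypothetical).

    Photons start at the origin travelling in +z; each collision either
    absorbs the photon (probability 1 - albedo) or scatters isotropically.
    """
    rng = random.Random(seed)
    tally = {"transmit": 0, "reflect": 0, "lateral": 0, "absorb": 0}
    for _ in range(n):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0
        while True:
            s = -math.log(1.0 - rng.random()) / mu_t  # sample free path
            x += ux * s; y += uy * s; z += uz * s
            if z >= L:
                tally["transmit"] += 1; break
            if z < 0.0:
                tally["reflect"] += 1; break
            if x * x + y * y >= R * R:
                tally["lateral"] += 1; break
            if rng.random() > albedo:                 # absorbed at collision
                tally["absorb"] += 1; break
            uz = 2.0 * rng.random() - 1.0             # isotropic scatter
            phi = 2.0 * math.pi * rng.random()
            sin_t = math.sqrt(1.0 - uz * uz)
            ux, uy = sin_t * math.cos(phi), sin_t * math.sin(phi)
    return tally
```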
Multi-core performance studies of a Monte Carlo neutron transport code
Siegel, A. R.; Smith, K.; Romano, P. K.; Forget, B.; Felker, K. G.
2013-07-14
Performance results are presented for a multi-threaded version of the OpenMC Monte Carlo neutronics code using OpenMP in the context of nuclear reactor criticality calculations. Our main interest is production computing, and thus we limit our approach to threading strategies that both require reasonable levels of development effort and preserve the code features necessary for robust application to real-world reactor problems. Several approaches are developed and the results compared on several multi-core platforms using a popular reactor physics benchmark. A broad range of performance studies are distilled into a simple, consistent picture of the empirical performance characteristics of reactor Monte Carlo algorithms on current multi-core architectures.
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
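The paper's Hamiltonian and potentials are not reproduced here, but the gate-voltage-biased, Metropolis-like acceptance step can be sketched as follows; the `coupling` lever-arm parameter and the way the gate voltage enters the effective energy change are assumptions for illustration only.

```python
import math
import random

def accept_move(delta_e, v_gate, kT=0.025, coupling=1.0, rng=random.random):
    """Metropolis-like acceptance criterion (illustrative sketch).

    delta_e:  energy change of the proposed charge-carrier move (eV)
    v_gate:   source-gate voltage, entering as a hypothetical energy bias
    """
    delta = delta_e - coupling * v_gate  # gate bias tilts the landscape
    if delta <= 0.0:
        return True
    return rng() < math.exp(-delta / kT)
```

Since the simulation has no time axis, the number of accepted moves plays the role of pseudo-time, as the abstract notes.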
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-21
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
NASA Astrophysics Data System (ADS)
Filges, Detlef
1992-04-01
The application potential of modern particle radiation transport simulations for solving safety and radiation protection problems in accelerator technology and in space science and technology is presented. It is shown to what extent Monte Carlo simulation is helpful in defining safety regulations and safety standards and in determining the corresponding safety proofs. For this purpose the basic methods are described, and their performance is assessed together with particular examples. The state of the art of the computational methods is described together with the existing computer codes. Details of the current three-dimensional geometry packages are also given. The nuclear transport data and reaction cross sections necessary for shielding and safety protection problems are investigated. The performance and flexibility of HERMES (High Energy Radiation Monte Carlo Elaborate System) and its use in solving safety protection problems are demonstrated. Examples of radiation protection and high-energy source shielding for medium- and high-energy particle accelerators and for high-current particle accelerator target systems are summarized. Radiation protection and shielding of space vehicles against cosmic ray radiation are compiled and assessed. Validation against experimental results is also given.
Jadach, S.; Ward, B.F.
1996-07-01
We present the theoretical basis and sample Monte Carlo data for the YFS exponentiated O(α) calculation of polarized Møller scattering at c.m.s. energies large compared to 2me. Both longitudinal and transverse polarizations are discussed. Possible applications to Møller polarimetry at the SLD are thus illustrated. © 1996 The American Physical Society.
Result of Monte-Carlo simulation of electron-photon cascades in lead and layers of lead-scintillator
NASA Technical Reports Server (NTRS)
Wasilewski, A.; Krys, E.
1985-01-01
Results of Monte-Carlo simulation of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a reduction of the number of secondary particles.
A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
Gentile, N A; Trahan, T J
2011-03-22
We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.
GOORLEY, TIM
2013-07-16
Version 01 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
GOORLEY, TIM
2013-07-16
Version 00 US DOE 10CFR810 Jurisdiction. MCNP6 is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP6 represents the culmination of a multi-year effort to merge the MCNP5 [X-503] and MCNPX [PEL11] codes into a single product comprising all features of both. For those familiar with previous versions of MCNP, you will discover the code has been expanded to handle a multitude of particles and to include model physics options for energies above the cross-section table range, a material burnup feature, and delayed particle production. Expanded and/or new tally, source, and variance-reduction options are available to the user as well as an improved plotting capability. The capability to calculate keff eigenvalues for fissile systems remains a standard feature. Although MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, the result is much more than the sum of these two computer codes. MCNP6 is the result of five years of effort by the MCNP5 and MCNPX code development teams. These groups of people, residing in the Los Alamos National Laboratory's (LANL) X Computational Physics Division, Monte Carlo Codes Group (XCP-3), and Nuclear Engineering and Nonproliferation Division, Systems Design and Analysis Group (NEN-5, formerly D-5), have combined their code development efforts to produce the next evolution of MCNP. While maintenance and bug fixes will continue for MCNP5 v.1.60 and MCNPX v.2.7.0 for upcoming years, new code development capabilities will be developed and released only in MCNP6. In fact, this initial production release of MCNP6 (v. 1.0) contains 16 new features not previously found in either code. These new features include (among others) the abilities to import unstructured mesh geometries from the finite element code Abaqus, to transport photons down to 1.0 eV, to model complete atomic
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
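WARP's actual implementation is a high-efficiency parallel radix sort on the GPU; a serial sketch of the same idea, using a stable counting sort to build the remapping vector so that contiguous threads process particles undergoing the same reaction, might look like this:

```python
def build_remap(reaction_types):
    """Build a remapping vector of particle indices keyed by reaction type.

    A stable counting sort (the 1-digit case of radix sort): after the
    pass, remap lists particle indices grouped by reaction type, and
    starts[t] gives the offset of group t.
    """
    nbins = max(reaction_types) + 1
    counts = [0] * nbins
    for t in reaction_types:
        counts[t] += 1
    starts, total = [], 0
    for c in counts:            # exclusive prefix sum of the histogram
        starts.append(total)
        total += c
    remap = [0] * len(reaction_types)
    cursor = list(starts)
    for idx, t in enumerate(reaction_types):
        remap[cursor[t]] = idx  # stable scatter into the type's slot
        cursor[t] += 1
    return remap, starts
```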
Charge Transport in Two-Photon Semiconducting Structures for Solar Fuels.
Liu, Guohua; Du, Kang; Haussener, Sophia; Wang, Kaiying
2016-10-20
Semiconducting heterostructures are emerging as promising light absorbers and offer effective electron-hole separation to drive solar chemistry. This technology relies on semiconductor composites or photoelectrodes that work in the presence of a redox mediator and that create cascade junctions to promote surface catalytic reactions. Rational tuning of their structures and compositions is crucial to fully exploit their functionality. In this review, we describe the possibilities of applying the two-photon concept to the field of solar fuels. A wide range of strategies including the indirect combination of two semiconductors by a redox couple, direct coupling of two semiconductors, multicomponent structures with a conductive mediator, related photoelectrodes, as well as two-photon cells are discussed for light energy harvesting and charge transport. Examples of charge extraction models from the literature are summarized to understand the mechanism of interfacial carrier dynamics and to rationalize experimental observations. We focus on a working principle of the constituent components and linking the photosynthetic activity with the proposed models. This work gives a new perspective on artificial photosynthesis by taking simultaneous advantages of photon absorption and charge transfer, outlining an encouraging roadmap towards solar fuels.
Chiavassa, Sophie; Buge, François; Hervé, Chloé; Delpon, Gregory; Rigaud, Jérome; Lisbona, Albert; Supiot, Sthéphane
2015-12-01
The aim of this study was to evaluate the effect of inhomogeneities on dose calculation for low-energy photon intra-operative radiation therapy (IORT) in the pelvic area. A GATE Monte Carlo model of the INTRABEAM® was adapted for the study. Simulations were performed in the CT scan of a cadaver considering a homogeneous segmentation (water) and an inhomogeneous segmentation (5 tissues from ICRU 44). Measurements were performed in the cadaver using EBT3 Gafchromic® films. The impact of inhomogeneities on dose calculation in the cadaver was 6% for soft tissues and greater than 300% for bone tissues. EBT3 measurements showed better agreement with calculation for inhomogeneous media. However, the dose discrepancy in soft tissues led to a sub-millimeter (0.65 mm) shift of the effective dose point in depth. Except for bone tissues, the effect of inhomogeneities on dose calculation for low-energy photon intra-operative radiation therapy in the pelvic area was not significant for the studied anatomy.
NASA Astrophysics Data System (ADS)
Ezzati, A. O.; Sohrabpour, M.
2013-02-01
In this study, azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods were implemented in the MCNPX 2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method for track length estimator tallies; in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained; they reveal that with increasing photon energy, the depth and lateral extent of the contours increase. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out-of-field voxel relative efficiency contours showed that latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000.
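The MCNPX source modifications are not shown in the abstract; the core idea of azimuthal rotational splitting, however, can be sketched independently: replicate a particle into N copies rotated uniformly about the beam axis, dividing the statistical weight so the estimate stays unbiased. This is a generic sketch, not the authors' implementation.

```python
import math

def azimuthal_split(x, y, weight, n_split):
    """Split a particle at transverse position (x, y) into n_split copies.

    Each copy is rotated by 2*pi*k/n_split about the beam (z) axis and
    carries weight/n_split, preserving the total statistical weight.
    """
    copies = []
    w = weight / n_split
    for k in range(n_split):
        phi = 2.0 * math.pi * k / n_split
        c, s = math.cos(phi), math.sin(phi)
        copies.append((x * c - y * s, x * s + y * c, w))
    return copies
```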
Glaser, A; Zhang, R; Gladstone, D; Pogue, B
2014-06-01
Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between Cherenkov light emission and dose deposition were explored for electron, proton, and flattened, unflattened, and parallel-opposed x-ray photon beams. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening. The technique was found to be unsuitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was also found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry via the Cherenkov effect may be most applicable to near-monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams the technique would be best suited to superficial dosimetry, and for protons it is not applicable due to the lack of light emission. NIH R01CA109558 and R21EB017559.
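The beam-hardening argument can be illustrated numerically. In this toy model (all coefficients hypothetical, not from the paper), each energy bin attenuates as exp(-mu*z); if Cherenkov light and dose weight the bins differently, their normalized depth profiles drift apart with depth, whereas a single bin (monoenergetic beam) gives identical profiles.

```python
import math

def profile(weights, mus, z):
    """Toy depth profile: weighted sum of exponentials exp(-mu * z)."""
    return sum(w * math.exp(-mu * z) for w, mu in zip(weights, mus))

mus = [0.07, 0.04]       # per-cm attenuation per energy bin, illustrative
dose_w = [0.5, 0.5]      # relative dose contribution per bin
light_w = [0.4, 0.6]     # relative Cherenkov contribution per bin
for z in (0.0, 10.0, 20.0):
    d = profile(dose_w, mus, z) / profile(dose_w, mus, 0.0)
    l = profile(light_w, mus, z) / profile(light_w, mus, 0.0)
    print(z, round(l / d, 4))   # ratio drifts from 1 as the beam hardens
```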
Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo
2010-02-15
Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for 192Ir, 125I, and 103Pd on the final estimations of kerma and dose. Methods: For 192Ir, 125I, and 103Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of 192Ir, 125I, and 103Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for 192Ir, 125I, and 103Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for the air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected, because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.
1990-01-01
Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from secondaries and heavy-particle recoils, is obtained between BRYNTRN and Monte Carlo results.
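The parameterization used in the paper is not reproduced in the abstract; the general shape of a buildup-factor dose estimate, however, is attenuated primary fluence times a buildup factor that folds in secondary-particle dose, times a dose conversion factor. A minimal sketch with a hypothetical polynomial buildup factor:

```python
import math

def buildup_dose(depth, fluence0, sigma, conv, b_coeffs):
    """Buildup-factor dose estimate (generic sketch, not the paper's form).

    depth:    shield depth
    fluence0: incident primary fluence
    sigma:    macroscopic removal cross section (attenuation coefficient)
    conv:     dose conversion factor
    b_coeffs: polynomial coefficients of the buildup factor B(depth)
    """
    primary = fluence0 * math.exp(-sigma * depth)
    buildup = sum(c * depth**k for k, c in enumerate(b_coeffs))
    return conv * primary * buildup
```

With `b_coeffs=[1.0]` (no secondary buildup) this reduces to pure exponential attenuation.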
Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks
Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit; Boozer, Allen H.
2012-04-11
Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and a numerical simulation is ultimately required for its precise description and understanding. A new delta f particle code (POCA) has been developed for this purpose using a modified pitch angle collision operator that preserves momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. Calculations indicate a clear B² dependence of NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.
Wheeler, F.J.; Wessol, D.E.
1995-12-31
The rtt-MC dose calculation module of the BNCT-Rtpe treatment planning system has been developed specifically for boron neutron capture therapy. Due to the complicated nature of combined gamma, fast-, epithermal- and thermal-energy neutron transport in tissue, all approaches to treatment planning for this modality to date rely on Monte Carlo or three-dimensional discrete ordinates methods. Simple, fast, and accurate methods for this modality simply have not been developed. In this paper the authors discuss some of the unique attributes of this therapy and the approaches they have used to begin to merge into clinical applications. As this paper is being drafted, the modern implementation of boron neutron capture therapy in the US is being realized. Research on skin and tumor effects for superficial melanoma of the extremities has been initiated at the Massachusetts Institute of Technology, and brain cancer therapy (using this planning system) has begun at Brookhaven National Laboratory.
Alexandrakis, G; Farrell, T J; Patterson, M S
2000-05-01
We propose a hybrid Monte Carlo (MC) diffusion model for calculating the spatially resolved reflectance amplitude and phase delay resulting from an intensity-modulated pencil beam vertically incident on a two-layer turbid medium. The model combines the accuracy of MC at radial distances near the incident beam with the computational efficiency afforded by a diffusion calculation at further distances. This results in a single forward calculation several hundred times faster than pure MC, depending primarily on model parameters. Model predictions are compared with MC data for two cases that span the extremes of physiologically relevant optical properties: skin overlying fat and skin overlying muscle, both in the presence of an exogenous absorber. It is shown that good agreement can be achieved for radial distances from 0.5 to 20 mm in both cases. However, in the skin-on-muscle case the choice of model parameters and the definition of the diffusion coefficient can lead to some interesting discrepancies.
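The hybrid scheme can be sketched as a crossover between a tabulated Monte Carlo solution near the beam and an analytic diffusion estimate farther out. The diffusion formula below is the standard single-point-source approximation for a semi-infinite medium, a simplification of the paper's two-layer model; the crossover radius and lookup table are assumptions.

```python
import math

def diffusion_reflectance(rho, mu_a, mu_s_prime):
    """Steady-state diffusion estimate of reflectance at radius rho from
    an isotropic point source buried at depth z0 = 1/mu_s' (a standard
    semi-infinite single-source approximation, simpler than the paper's
    two-layer frequency-domain model)."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    z0 = 1.0 / mu_s_prime
    r = math.sqrt(rho * rho + z0 * z0)
    return z0 * (mu_eff + 1.0 / r) * math.exp(-mu_eff * r) / (4.0 * math.pi * r * r)

def hybrid_reflectance(rho, mc_table, rho_crossover, mu_a, mu_s_prime):
    """Use tabulated Monte Carlo results near the beam, where diffusion
    theory fails, and the diffusion model beyond the crossover radius."""
    if rho < rho_crossover:
        return mc_table(rho)
    return diffusion_reflectance(rho, mu_a, mu_s_prime)
```

The single forward calculation is fast because only the near-field region needs Monte Carlo precomputation.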
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3-carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
FILGES, DETLEF
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo Codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM that is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Also response data can be submitted by each program, to be collected and combined by a statistic package included within the command interpreter.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, so a modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that the modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up
Sikora, M; Dohm, O; Alber, M
2007-08-07
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large field measurements in air and in water. The original commissioning procedure of the VEF, based on large field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM is used to generate the source phase space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up generation of the phase space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM was commissioned for 6 and 15 MV beams. The results of MC simulation are in very good agreement with measurements: local differences of less than 2% between the MC simulation and diamond detector measurements of the output factors in water were achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation for field sizes from 0.8 x 0.8 cm² to 16 x 21 cm² was achieved. An IMRT plan film verification
Lee, C; Badal, A
2014-06-15
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may result in dosimetric error compared to real anatomy composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels was 6.4×10^4, 3.3×10^4, and 1.3×10^4 for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10^4 histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
A POD reduced order model for resolving angular direction in neutron/photon transport problems
Buchan, A.G.; Calloo, A.A.; Goffin, M.G.; Dargaville, S.; Fang, F.; Pain, C.C.; Navon, I.M.
2015-09-01
This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems: one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
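The method of snapshots used above can be sketched compactly: collect solution vectors (here, per the abstract, vectors of angular expansion coefficients recorded at different spatial locations) as columns of a snapshot matrix, take an SVD, and keep the leading left singular vectors as the reduced basis; a Galerkin projection then shrinks the full operator. This is a generic POD sketch, not the paper's implementation; all names and the energy tolerance are illustrative.

```python
import numpy as np

def pod_basis(snapshots, energy_frac=0.9999):
    """Form a POD basis from an (n_dof x n_snap) snapshot matrix.
    Keeps the smallest number of modes capturing `energy_frac` of the
    squared singular-value 'energy'."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, energy_frac)) + 1
    return U[:, :r]

def reduce_operator(A, Phi):
    """Galerkin projection: the full operator A becomes the small dense
    matrix Phi^T A Phi; reduced solutions are lifted back as Phi @ x_r."""
    return Phi.T @ A @ Phi
```

The two-orders-of-magnitude size reduction quoted above corresponds to the number of retained columns of Phi being far smaller than the original angular resolution.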
δf Monte Carlo calculation of neoclassical transport in perturbed tokamaks
Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit J.; Boozer, Allen H.
2012-08-15
Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle orbit code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in the axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and the results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and magnetic field spectrum, and compared with the combined and 1/ν NTV theory. Calculations indicate a clear δB² scaling of NTV, and good agreement with the theory on NTV torque profiles and amplitudes depending on collisionality.
Yani, Sitti; Dirgayussa, I Gde E.; Haryanto, Freddy; Arif, Idam; Rhani, Moh. Fadhillah
2015-09-30
Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method of predicting dose distributions in radiotherapy. The MC code system (specifically DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made for three voxel sizes: 1 × 1 × 0.1 cm³, 1 × 1 × 0.5 cm³, and 1 × 1 × 0.8 cm³. 1 × 10⁹ histories were simulated in order to reach statistical uncertainties of 2%; each simulation takes about 9-10 hours to complete. Measurements were made with a 10 × 10 cm² field size for 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were simulated and measured in a water phantom. The outputs of the simulations, the percent depth dose (PDD) and the dose profile at d_max for the three sets of calculations, are presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over 0-5 cm depth. Dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results of this study show that the difference between Monte Carlo simulation and experimental data depends on the voxel size for both the PDD and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained for the 1 × 1 × 0.8 cm³ voxel size. The dose profiles were examined in the high-gradient dose region; for the profile scan along the Y axis, the largest difference, about 12%, was obtained for the 1 × 1 × 0.1 cm³ voxel size. This study demonstrates that the choice of voxel size in Monte Carlo simulation is important.
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A number of equipment models are implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.
NASA Astrophysics Data System (ADS)
Mühlbacher, Lothar; Ankerhold, Joachim
2005-05-01
Electron transfer (ET) across molecular chains including an impurity is studied based on a recently improved real-time path-integral Monte Carlo (PIMC) approach [L. Mühlbacher, J. Ankerhold, and C. Escher, J. Chem. Phys. 121 12696 (2004)]. The reduced electronic dynamics is studied for various bridge lengths and defect site energies. By determining intersite hopping rates from PIMC simulations up to moderate times, the relaxation process in the extreme long-time limit is captured within a sequential transfer model. The total transfer rate is extracted and shown to be enhanced for certain defect site energies. Superexchange turns out to be relevant for extreme gap energies only and then gives rise to different dynamical signatures for high- and low-lying defects. Further, it is revealed that the entire bridge compound approaches a steady state on a much shorter time scale than that related to the total transfer. This allows for a simplified description of ET along donor-bridge-acceptor systems in the long-time range.
NASA Astrophysics Data System (ADS)
Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang
2011-07-01
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal’s azimuthal position.
Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang
2011-07-15
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
NASA Astrophysics Data System (ADS)
García-García, J.; Martín, F.; Oriols, X.; Suñé, J.
Because of their high switching speed, low power consumption and reduced complexity to implement a given function, resonant tunneling diodes (RTDs) have recently been recognized as excellent candidates for digital circuit applications [1]. Device modeling and simulation is thus important, not only to understand mesoscopic transport properties, but also to provide guidance in optimal device design and fabrication. Several approaches have been used to this end. Among kinetic models, those based on the non-equilibrium Green function formalism [2] have gained increasing interest due to their ability to incorporate coherent and incoherent interactions in a unified formulation. The Wigner distribution function approach has also been extensively used to study quantum transport in RTDs [3-6]. The main limitations of this formulation are the semiclassical treatment of carrier-phonon interactions by means of the relaxation time approximation and the huge computational burden associated with the self-consistent solution of the Liouville and Poisson equations. This has imposed severe limitations on spatial domains, these being too small to succeed in the development of reliable simulation tools. Based on the Wigner function approach, we have developed a simulation tool that allows us to extend the simulation domains up to hundreds of nanometers without a significant increase in computer time [7]. This tool is based on the coupling between the Wigner distribution function (quantum Liouville equation) and the Boltzmann transport equation. The former is applied to the active region of the device including the double barrier, where quantum effects are present (quantum window, QW). The latter is solved by means of a Monte Carlo algorithm and applied to the outer regions of the device, where quantum effects are not expected to occur. Since the classical Monte Carlo algorithm is much less time consuming than the discretized version of the Wigner transport equation, we can considerably
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES
Schnittman, Jeremy D.; Krolik, Julian H.
2013-11-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
NASA Astrophysics Data System (ADS)
Jaradat, Adnan Khalaf
The x ray leakage from the housing of a therapy x ray source is regulated to be <0.1% of the useful beam exposure at a distance of 1 m from the source. The x ray leakage in the backward direction has been measured from linacs operating at 4, 6, 10, 15, and 18 MV using a 100 cm³ ionization chamber and track-etch detectors. The leakage was measured at nine different positions over the rear wall using a 3 x 3 matrix with a 1 m separation between adjacent positions. In general, the leakage was less than the canonical value, but the exact value depends on energy, gantry angle, and measurement position. Leakage at 10 MV for some positions exceeded 0.1%. Electrons with energy greater than about 9 MeV have the ability to produce neutrons. Neutron leakage has been measured around the head of electron accelerators at a distance of 1 m from the target at 0°, 46°, 90°, 135°, and 180° azimuthal angles, for electron energies of 9, 12, 15, 16, 18, and 20 MeV and for 10, 15, and 18 MV x ray photon beams, using a neutron bubble detector of type BD-PND and track-etch detectors. The highest neutron dose equivalent per unit electron dose was at 0° for all electron energies. The neutron leakage from photon beams was the highest among all the machines. Intensity modulated radiation therapy (IMRT) delivery consists of a summation of small beamlets having different weights that make up each field. A linear accelerator room designed exclusively for IMRT use would require different, probably lower, tenth value layers (TVL) for determining the required wall thicknesses for the primary barriers. The first, second, and third TVL of 60Co gamma rays and photons from 4, 6, 10, 15, and 18 MV x ray beams in concrete have been determined and modeled using a Monte Carlo technique (MCNP version 4C2) for cone beams of half-opening angles of 0°, 3°, 6°, 9°, 12°, and 14°.
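The tenth-value-layer arithmetic behind the barrier discussion above is standard shielding practice: the required transmission B sets the number of TVLs as n = log10(1/B), and the barrier thickness is taken as the first TVL plus (n − 1) equilibrium TVLs. The sketch below states that rule; the concrete TVL numbers are illustrative placeholders, not values determined in this work.

```python
import math

def barrier_thickness(B, tvl1, tvle):
    """Barrier thickness giving transmission B, using the first TVL `tvl1`
    and the equilibrium TVL `tvle` (same length units for both)."""
    n = math.log10(1.0 / B)          # number of tenth-value layers needed
    if n <= 1.0:
        return n * tvl1              # thin-barrier case: scale the first TVL
    return tvl1 + (n - 1.0) * tvle

# e.g. a primary concrete barrier needing a factor-of-1000 attenuation,
# with illustrative TVLs of 45 cm (first) and 43 cm (equilibrium):
t = barrier_thickness(1e-3, 45.0, 43.0)   # 45 + 2*43 = 131 cm
```

Distinguishing the first TVL from the equilibrium TVL matters because beam hardening makes the first layer slightly more (or less) effective than subsequent ones, which is exactly why the abstract reports first, second, and third TVLs separately.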
Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung
2012-08-07
The energy-resolved photon counting detector provides spectral information that can be used to generate images. Novel imaging methods, including K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled an x-ray computed tomography system based on a CdTe energy-resolved photon counting detector and a polymethylmethacrylate phantom containing iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at the K-edge absorption energies of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above a specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is little difference in the contrast when the number of energy bins is
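Projection-based energy weighting of the kind studied above can be sketched as follows: multiply the counts in each energy bin by a weight before the log step, so that bins where the attenuation contrast is strongest dominate the projection. The w(E) ∝ E⁻³ weight used here is a common generic choice for photoelectric-dominated contrast, and all bin energies and attenuation values below are illustrative assumptions, not the paper's optimized weighting factors.

```python
import numpy as np

def weighted_projection(counts, flat_counts, weights):
    """Log projection of weighted bin sums (projection-based weighting)."""
    return -np.log(np.sum(weights * counts) / np.sum(weights * flat_counts))

# Illustrative 3-bin example: attenuation contrast is largest at low energy,
# so E^-3 weighting boosts contrast relative to uniform weighting.
E = np.array([30.0, 50.0, 70.0])            # bin energies, keV (assumed)
I0 = np.array([1e5, 1e5, 1e5])              # open-beam counts per bin
mu_bg = np.array([0.50, 0.30, 0.20])        # background, 1/cm (assumed)
mu_obj = np.array([0.90, 0.40, 0.25])       # object, 1/cm (assumed)
N_bg, N_obj = I0 * np.exp(-mu_bg), I0 * np.exp(-mu_obj)

w = E ** -3.0                                # energy-dependent weight
contrast_w = (weighted_projection(N_obj, I0, w)
              - weighted_projection(N_bg, I0, w))
contrast_u = (weighted_projection(N_obj, I0, np.ones(3))
              - weighted_projection(N_bg, I0, np.ones(3)))
```

The trade-off the abstract measures through the COV appears here too: down-weighting high-energy bins discards counts, so the weighting that maximizes contrast is not automatically the one that maximizes CNR.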
Mosleh-Shirazi, Mohammad Amin; Karbasi, Sareh; Shahbazi-Gahrouei, Daryoush; Monadi, Shahram
2012-11-08
Full buildup diodes can cause significant dose perturbation if they are used on most or all radiotherapy fractions. Given the importance of frequent in vivo measurements in complex treatments, using thin-buildup (low-perturbation) diodes instead is gathering interest. However, such diodes are strictly unsuitable for high-energy photons; therefore, their use requires evaluation and careful measurement of correction factors (CFs). There is little published data on such factors for low-perturbation diodes, and none on diode characterization for 9 MV X-rays. We report on MCNP4c Monte Carlo models of low-perturbation (EDD5) and medium-perturbation (EDP10) diodes, and a comparison of source-to-surface distance, field size, temperature, and orientation CFs for cobalt-60 and 9 MV beams. Most of the simulation results were within 4% of the measurements. The results argue against use of the EDD5 at axial angles beyond ±50° and at tilt angles outside the range 0° to +50° at 9 MV. Outside these ranges, although the EDD5 can be used for accurate in vivo dosimetry at 9 MV, its CF variations were found to be 1.5-7.1 times larger than those of the EDP10 and, therefore, should be applied carefully. Finally, the MCNP diode models are sufficiently reliable tools for independent verification of potentially inaccurate measurements.
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2015-12-01
Assessment of intake of long-lived actinides via the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room, using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors in the Monte Carlo code FLUKA. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics, in the form of scattering factors (SFs). SFs are found to decrease with increase in energy. With the HPGe array, the highest SF, 1.84, is found at 18 keV. It reduces to 1.36 at 238 keV.
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.
The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
Rhoades, W.A.; Simpson, D.B.
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
NASA Astrophysics Data System (ADS)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
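The most expensive step the abstract identifies, generating the Kolmogorov-type turbulence, is commonly done as a sum over Fourier modes with random phases and a k⁻⁵ᐟ³ power spectrum, and it is that per-position mode sum which maps naturally onto the SIMD model. Below is a minimal CPU sketch of a 1D ("slab") version of such a generator; the mode count, wavelength range, and normalization are illustrative assumptions, not the parameters of the code described above.

```python
import numpy as np

def slab_turbulence(x, n_modes=64, l_min=0.01, l_max=1.0, seed=1):
    """delta-B at positions x as a sum of random-phase Fourier modes with
    Kolmogorov amplitudes (power ~ k^(-5/3)), normalized to unit variance."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi / np.logspace(np.log10(l_max), np.log10(l_min), n_modes)
    amp = k ** (-5.0 / 6.0)                # amplitude = sqrt(power)
    amp *= np.sqrt(2.0 / np.sum(amp**2))   # variance of the cosine sum -> 1
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    # every position evaluates every mode independently: SIMD-friendly
    return np.sum(amp[:, None] * np.cos(k[:, None] * x[None, :]
                                        + phase[:, None]), axis=0)
```

Because each particle position evaluates the same mode sum with no data dependence between positions, the loop vectorizes trivially, which is why offloading this part to the graphics card dominates the speedup.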
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.
FRC equilibrium reconstruction by Bayesian evaluation of Monte Carlo transport simulations
NASA Astrophysics Data System (ADS)
Rath, Nikolaus; Onofri, M.; Trask, E.; TAE Team
2016-10-01
Beam-driven field reversed configurations (FRCs) can be sustained for multiple ms. Many important properties of such FRCs cannot be measured directly. When such properties are needed to guide experiments, they are either substituted by proxies (e.g. the excluded flux radius rΔΦ is used instead of the separatrix radius rs), or derived from other measurements by imposing specific models (e.g. ⟨T⟩ is computed from ∫ n dl by assuming B² ∝ nT). With increasing fast ion population these methods become increasingly inaccurate, so a third method has been developed. Transport simulations are run with a variety of model parameters and snapshots saved periodically, resulting in a pool of feasible plasma states. Synthetic measurements for each state are compared with experimental measurements, giving a probability distribution of states for each time-point in the experiment. Properties like rs and T are taken from the simulated state most consistent with measurements. The probability distribution gives a measure of the uncertainty in each parameter. The method is validated by comparison with independent measurements. Reconstruction takes seconds per time-point. Tri Alpha Energy, Inc.
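The state-selection step described above can be sketched as a likelihood weighting over the pool of simulated states: each state's synthetic diagnostics are compared with the measured signals under an error model (a Gaussian one is assumed here for illustration), and the resulting weights yield both a best estimate and an uncertainty for any property of the simulated states. All names and the error model are illustrative, not the reconstruction code's actual interface.

```python
import numpy as np

def state_weights(synthetic, measured, sigma):
    """Gaussian-likelihood weights for each simulated state.
    synthetic: (n_states, n_signals); measured, sigma: (n_signals,)."""
    chi2 = np.sum(((synthetic - measured) / sigma) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # shift by min for stability
    return w / w.sum()

def weighted_estimate(prop, w):
    """Weighted mean and spread of a property (e.g. a separatrix radius)
    evaluated over the simulated states."""
    mean = np.sum(w * prop)
    return mean, np.sqrt(np.sum(w * (prop - mean) ** 2))
```

Taking the property from the most consistent state corresponds to reading off the maximum-weight entry; the spread of the weighted distribution is what supplies the per-parameter uncertainty the abstract mentions.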
Fung, E-Dean; Adak, Olgun; Lovat, Giacomo; Scarabelli, Diego; Venkataraman, Latha
2017-02-08
We investigate light-induced conductance enhancement in single-molecule junctions via photon-assisted transport and hot-electron transport. Using 4,4'-bipyridine bound to Au electrodes as a prototypical single-molecule junction, we report a 20-40% enhancement in conductance under illumination with 980 nm wavelength radiation. We probe the effects of subtle changes in the transmission function on light-enhanced current and show that discrete variations in the binding geometry result in a 10% change in enhancement. Importantly, we prove theoretically that the steady-state behavior of photon-assisted transport and hot-electron transport is identical but that hot-electron transport is the dominant mechanism for optically induced conductance enhancement in single-molecule junctions when the wavelength used is absorbed by the electrodes and the hot-electron relaxation time is long. We confirm this experimentally by performing polarization-dependent conductance measurements of illuminated 4,4'-bipyridine junctions. Finally, we perform lock-in type measurements of optical current and conclude that currents due to laser-induced thermal expansion mask optical currents. This work provides a robust experimental framework for studying mechanisms of light-enhanced transport in single-molecule junctions and offers tools for tuning the performance of organic optoelectronic devices by analyzing detailed transport properties of the molecules involved.
Douglass, Michael; Bezak, Eva; Penfold, Scott
2013-07-15
Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10⁶ particles were simulated across the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 µm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p
Nadar, M Y; Akar, D K; Patni, H K; Singh, I S; Mishra, L; Rao, D D; Pradeepkumar, K S
2014-12-01
In case of internal contamination due to long-lived actinides by inhalation or injection pathway, a major portion of activity will be deposited in the skeleton and liver over a period of time. In this study, calibration factors (CFs) of a phoswich and an array of HPGe detectors are estimated using skull and knee voxel phantoms. These phantoms are generated from the International Commission on Radiological Protection reference male voxel phantom. The phantoms, as well as a 20 cm diameter phoswich having a 1.2 cm thick NaI(Tl) primary and a 5 cm thick CsI(Tl) secondary detector, and an array of three HPGe detectors (each of diameter 7 cm and thickness 2.5 cm), are incorporated in the Monte Carlo code 'FLUKA'. Biokinetic models of Pu, Am, U and Th are solved using default parameters to identify different parts of the skeleton where activity will accumulate after an inhalation intake of 1 Bq. Accordingly, CFs are evaluated for the uniform source distribution in trabecular bone and bone marrow (TBBM), cortical bone (CB) as well as in both TBBM and CB regions for photon energies of 18, 60, 63, 74, 93, 185 and 238 keV describing sources of ²³⁹Pu, ²⁴¹Am, ²³⁸U, ²³⁵U and ²³²Th. The CFs are also evaluated for non-uniform distribution of activity in TBBM and CB regions. The variation in the CFs for source distributed in different regions of the bones is studied. The assessment of skeletal activity of actinides from skull and knee activity measurements is discussed along with the errors.
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.
Kinetic Monte Carlo model of charge transport in hematite (α-Fe₂O₃)
Kerisit, Sebastien; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parametrized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ≈2% of iron(III) sites, whereas the
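The small-polaron hopping dynamics described here are the natural territory of the residence-time (BKL/Gillespie) kinetic Monte Carlo algorithm: pick an event in proportion to its rate, then advance the clock by an exponentially distributed waiting time. Below is a minimal sketch on a 1D chain with a single nearest-neighbour hop rate; the hop distance and rate are assumed placeholders, not the ab initio-derived hematite parameters of the paper.

```python
import math
import random

# Residence-time kinetic Monte Carlo for one polaron hopping on a 1D chain
# of iron sites with periodic boundaries (illustrative parameters only).
random.seed(2)
n_sites = 100
a = 2.97e-10          # assumed hop distance (m)
k_hop = 1.0e12        # assumed nearest-neighbour hop rate (1/s)

pos, t, disp = 0, 0.0, 0
n_steps = 20000
for _ in range(n_steps):
    # Two competing events: hop left or hop right, equal rates here.
    rates = [k_hop, k_hop]
    k_tot = sum(rates)
    # Choose an event with probability proportional to its rate.
    step = -1 if random.random() * k_tot < rates[0] else 1
    pos = (pos + step) % n_sites
    disp += step
    # Advance the clock by an exponentially distributed residence time.
    t += -math.log(random.random()) / k_tot

# Einstein relation in 1D: D = <x^2> / (2 t). A single trajectory is noisy;
# a real calculation averages over many walkers (and adds defect potentials).
D = (disp * a) ** 2 / (2.0 * t)
print(D)
```

Defect interactions enter this scheme by making `rates` position-dependent, which is exactly where the paper's atomistically derived interaction potentials would plug in.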
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A
2014-09-19
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.
NASA Astrophysics Data System (ADS)
Borzdov, Andrei V.; Borzdov, Vladimir M.; V'yurkov, Vladimir V.
2016-12-01
Ensemble Monte Carlo simulation of electron transport in a GaAs/AlAs quantum wire transistor structure is performed. The response of the electron drift velocity to a harmonic longitudinal electric field is calculated for several values of the electric field strength amplitude and gate bias at 77 and 300 K. The periodic electric field has a frequency of 1 THz. Nonlinear behaviour of the electron drift velocity due to scattering processes is observed.
NASA Astrophysics Data System (ADS)
Arnold, Thorsten; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar
2013-01-01
We investigate magnetic-field-influenced time-dependent transport of Coulomb interacting electrons through a two-dimensional quantum ring in an electromagnetic cavity under nonequilibrium conditions described by a time-convolutionless non-Markovian master equation formalism. We take into account the full electromagnetic interaction of electrons and cavity photons. A bias voltage is applied to semi-infinite leads along the x axis, which are connected to the quantum ring. The magnetic field is tunable to manipulate the time-dependent electron transport coupled to a photon field with either x or y polarization. We find that the lead-system-lead current is strongly suppressed by the y-polarized photon field at magnetic field with two flux quanta due to a degeneracy of the many-body energy spectrum of the mostly occupied states. On the other hand, the lead-system-lead current can be significantly enhanced by the y-polarized field at magnetic field with half-integer flux quanta. Furthermore, the y-polarized photon field perturbs the periodicity of the persistent current with the magnetic field and suppresses the magnitude of the persistent current. The spatial and temporal density distributions reflect the characteristics of the many-body spectrum. The vortex formation in the contact areas to the leads influences the charge circulation in the ring.
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 to 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% to 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculation in that it is performed on a generation-by-generation basis and the thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ≈2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
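The generation-by-generation structure that complicates GPU thread coordination can be illustrated with a toy serial k-eigenvalue iteration: each generation transports a batch of neutrons, tallies the offspring, and renormalises the bank. The "physics" below is a deliberate caricature (a fixed fission probability and yield, chosen so the expected k is 1.0), not the codes or geometries benchmarked in the paper.

```python
import random

# Toy generation-by-generation Monte Carlo k-eigenvalue iteration.
random.seed(7)
p_fission = 0.4    # assumed probability a neutron history ends in fission
nu = 2.5           # assumed mean neutrons per fission (so k = 0.4 * 2.5 = 1.0)
n_per_gen = 5000
k_estimates = []

n_start = n_per_gen
for gen in range(30):
    produced = 0
    for _ in range(n_start):
        if random.random() < p_fission:
            # Integer sampling of nu: floor plus a Bernoulli on the fraction.
            produced += int(nu) + (random.random() < nu - int(nu))
    k_estimates.append(produced / n_start)
    n_start = n_per_gen   # renormalise the fission bank each generation

# Discard early (unconverged) generations, then average the batch estimates.
k_eff = sum(k_estimates[10:]) / len(k_estimates[10:])
print(round(k_eff, 3))
```

On a GPU, the inner per-neutron loop is the embarrassingly parallel part; the per-generation tally, renormalisation, and bank rebuild are the synchronisation points the paper's thread coordination has to handle.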
NASA Astrophysics Data System (ADS)
Hanna, S. R.; Russell, A. G.; Wilkinson, J. G.; Vukovich, J.; Hansen, D. A.
2005-01-01
A Monte Carlo (MC) probabilistic approach is used to estimate uncertainties in the emissions outputs of the Biogenics Emission Inventory System Version 3 (BEIS3) model and subsequent ozone outputs of three Chemical Transport Models (CTMs) due to uncertainties in many key BEIS3 biogenics emissions model parameters and inputs. BEIS3 was developed by the Environmental Protection Agency to estimate emissions of biogenic substances such as isoprene, monoterpenes, oxygenated and other volatile organic compounds (OVOCs), and biogenic nitric oxide (BNO). Uncertainties are investigated for three time periods, 24-29 May, 11-15 July, and 4-9 September 1995, which are ozone episodes that have been extensively studied by others as part of emissions control planning exercises. In the MC approach, 1000 samples are drawn randomly and independently from seventeen BEIS3 parameters and inputs, whose distribution shapes and variances are determined from literature reviews. The 95% confidence range on the calculated uncertainties in isoprene and BNO emissions cover approximately an order of magnitude. On the other hand, the 95% confidence ranges on the calculated uncertainties in monoterpenes and OVOC emissions are much smaller: about ±20%. Correlations are calculated between the 1000 MC samples of pairs of variations in model parameters or inputs and variations in BEIS3 daily emissions estimates for individual grid squares. A few significant correlations are found for some of the assumed model parameters. The MC uncertainties in the CTM-predicted 1- and 8-hour averaged ozone concentrations were studied by drawing 20 random samples from the 1000 sets of BEIS3 outputs and running each CTM (MAQSIP, UAM-V, and URM) 20 times for the three episodes. The estimated total uncertainties of ±15 to 20% are found to be nearly the same for the three CTMs over the three time periods, for 1- and 8-hour averages.
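The MC uncertainty approach described above — draw parameters independently from assumed distributions, run the model once per draw, then summarise the spread and parameter-output correlations — can be sketched on a stand-in model. The one-line emissions formula, distributions, and all numbers below are illustrative placeholders, not BEIS3.

```python
import math
import random

# Monte Carlo uncertainty propagation through a toy emissions model
# E = base * exp(a * T) * LAI (a stand-in, NOT the BEIS3 formulation).
random.seed(0)
N = 1000
samples = []
for _ in range(N):
    base = random.lognormvariate(math.log(10.0), 0.5)  # emission factor
    a = random.gauss(0.09, 0.01)                        # temperature slope
    lai = random.uniform(2.0, 4.0)                      # leaf area index
    T = 25.0                                            # fixed met driver here
    samples.append((base, a, lai, base * math.exp(a * T) * lai))

# 95% confidence range of the emissions output.
emissions = sorted(s[3] for s in samples)
lo, hi = emissions[int(0.025 * N)], emissions[int(0.975 * N)]

def corr(xs, ys):
    """Pearson correlation between a parameter and the model output."""
    mx, my = sum(xs) / N, sum(ys) / N
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r_base = corr([s[0] for s in samples], [s[3] for s in samples])
print(lo, hi, round(r_base, 2))
```

The correlation step mirrors the paper's screening of which of the seventeen inputs actually drive the output spread; a dominant lognormal factor, as here, produces the order-of-magnitude confidence ranges the abstract reports for isoprene.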
Koo, Brian T; Berard, Philip G; Clancy, Paulette
2015-03-10
Two-dimensional covalent organic frameworks (COFs), with their predictable assembly into ordered porous crystalline materials, tunable composition, and high charge carrier mobility, offer the possibility of creating ordered bulk heterojunction solar cells given a suitable electron-transporting material to fill the pores. The photoconductive (hole-transporting) properties of many COFs have been reported, including the recent creation of a TT-COF/PCBM solar cell by Dogru et al. Although a prototype device has been fabricated, its poor solar efficiency suggests a potential issue with electron transport caused by the interior packing of the fullerenes. Such packing information is absent and cannot be obtained experimentally. In this paper, we use Kinetic Monte Carlo (KMC) simulations to understand the dominant pore-filling mechanisms and packing configurations of C60 molecules in a Pc-PBBA COF that are similar to the COF fabricated experimentally. The KMC simulations thus offer more realistic filling conditions than our previously used Monte Carlo (MC) techniques. We found persistently large separation distances between C60 molecules that are absent in the more tractable MC simulations and which are likely to hinder electron transport significantly. We attribute the looser fullerene packing to the existence of stable motifs with pairwise distances that are mismatched with the underlying adsorption lattice of the COF. We conclude that larger pore COFs may be necessary to optimize electron transport and hence produce higher efficiency devices.
NASA Astrophysics Data System (ADS)
Chishti, Sabiq; Ghosh, Bahniman; Bishnoi, Bhupesh
2015-02-01
We have analyzed the spin transport behaviour of four II-VI semiconductor nanowires by simulating spin polarized transport using a semi-classical Monte-Carlo approach. The different scattering mechanisms considered are acoustic phonon scattering, surface roughness scattering, polar optical phonon scattering, and spin flip scattering. The II-VI materials used in our study are CdS, CdSe, ZnO and ZnS. The spin transport behaviour is first studied by varying the temperature (4-500 K) at a fixed diameter of 10 nm and also by varying the diameter (8-12 nm) at a fixed temperature of 300 K. For II-VI compounds, the dominant spin-relaxation mechanisms, D'yakonov-Perel' and Elliott-Yafet, have been employed in the first-order model to simulate the spin transport. The dependence of the spin relaxation length (SRL) on the diameter and temperature has been analyzed.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
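The Russian roulette step mentioned above is a standard variance-reduction device: a low-weight photon is either killed or survives with its weight boosted so the expected weight is unchanged. A minimal sketch, with an assumed weight cutoff and survival factor (the paper does not give its specific thresholds):

```python
import random

# Russian roulette: kill low-weight particles with probability 1 - 1/D,
# or let them survive with weight multiplied by D (unbiased on average).
random.seed(1)
W_CUTOFF = 0.1   # assumed weight threshold below which roulette is played
D = 5.0          # assumed survival factor; survival probability is 1/D

def roulette(weight):
    """Return the particle's new weight, or 0.0 if it is killed."""
    if weight >= W_CUTOFF:
        return weight
    return weight * D if random.random() < 1.0 / D else 0.0

# Unbiasedness check: the mean post-roulette weight equals the input weight.
w_in = 0.02
n = 200000
mean_out = sum(roulette(w_in) for _ in range(n)) / n
print(mean_out)
```

The payoff is that most low-weight scattered and bremsstrahlung photons are discarded cheaply, while the tally stays unbiased because the survivors carry proportionally more weight.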
NASA Astrophysics Data System (ADS)
Davidenko, V. D.; Zinchenko, A. S.; Harchenko, I. K.
2016-12-01
Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
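One of the fundamentals such notes cover — random sampling for transport — is inverse-CDF sampling of a particle's free-flight distance from p(s) = Σₜ exp(−Σₜ s), giving s = −ln(ξ)/Σₜ for ξ uniform on (0, 1). A minimal sketch (the cross-section value is arbitrary, not from the report):

```python
import math
import random

# Inverse-CDF sampling of the exponential free-flight distance.
random.seed(42)
sigma_t = 0.7   # macroscopic total cross section (1/cm), illustrative

def sample_flight():
    # random.random() is uniform on [0, 1); 1 - xi avoids log(0).
    return -math.log(1.0 - random.random()) / sigma_t

n = 100000
mean_s = sum(sample_flight() for _ in range(n)) / n
# The sample mean should approach the mean free path 1/Sigma_t.
print(mean_s, 1.0 / sigma_t)
```

The same inversion pattern underlies most of the random-sampling toolbox the course describes; only the CDF being inverted changes from distribution to distribution.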
Kim, S
2015-06-15
Purpose: To quantify the dosimetric variations of misaligned beams for a linear accelerator by using Monte Carlo (MC) simulations. Method and Materials: Misaligned beams of a Varian 21EX Clinac were simulated to estimate the dosimetric effects. All the linac head components for a 6 MV photon beam were implemented in the BEAMnrc/EGSnrc system. For the incident electron beam parameters, a 6 MeV beam with a 0.1 cm full-width-half-max Gaussian profile was used. A phase space file was obtained below the jaw for each misalignment condition of the incident electron beam: (1) the incident electron beams were tilted by 0.5, 1.0 and 1.5 degrees on the x-axis from the central axis; (2) the center of the incident electron beam was off-axially moved toward the +x-axis by 0.1, 0.2, and 0.3 cm away from the central axis. Lateral profiles for each misaligned beam condition were acquired at dmax = 1.5 cm and 10 cm depth in a rectangular water phantom. Beam flatness and symmetry were calculated by using the lateral profile data. Results: The lateral profiles were found to be skewed opposite to the angle of the incident beam for the tilted beams. For the displaced beams, similar skewed lateral profiles were obtained with small shifts of the penumbra on the +x-axis. The variations of beam flatness were 3.89–11.18% and 4.12–42.57% for the tilted beam and the translated beam, respectively. The beam symmetry was found to be 2.95–9.93% and 2.55–38.06%, respectively. The percent increase of the flatness and symmetry values is approximately 2 to 3% per 0.5 degree of tilt or per 1 mm of displacement. Conclusion: This study quantified the dosimetric effects of misaligned beams using MC simulations. The results would be useful to understand the magnitude of the dosimetric deviations for the misaligned beams.
NASA Astrophysics Data System (ADS)
Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.
2016-07-01
This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r s ) in the PS plane, its energy (E), and its polar and azimuthal angles (φ d and θ d ), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated-histograms were built by storing E, r s , φ d and θ d values. Five different adaptive binning schemes were studied to construct 4D histograms of the VSMs, to ensure histogram efficient handling as well as an accurate reproduction of E, r s , φ d and θ d distribution details. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing E, r s , φ d and θ d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. Performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been spent on the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD is able to improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on the above-mentioned technique, we presented an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and CNR with a Monte Carlo simulation. We simulated a PCD X-ray imaging system based on CdTe and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom above and below the iodine contrast agent K-edge absorption energy (33.2 keV) were acquired over different energy ranges. According to the results, the contrast and standard deviation decreased when the subtraction energy width of the energy window was increased. Also, the CNR using the KELS imaging technique is higher than that of the images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy width of the energy window can be acquired at 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized the subtraction energy width of the energy window, and based on
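The log-subtraction at the heart of a KES/KELS measurement can be sketched with a Beer-Lambert transmission model: in narrow windows just below and above the iodine K-edge, the background attenuation nearly cancels in the difference of log-intensities, leaving a signal proportional to the iodine path length. All attenuation coefficients and thicknesses below are made-up illustrative values, not the simulated phantom's.

```python
import math

# K-edge log-subtraction with a two-material Beer-Lambert model.
mu_tissue = 0.30                        # 1/cm, assumed flat across the edge
mu_iodine_lo, mu_iodine_hi = 1.0, 5.0   # 1/cm, iodine jumps up at its K-edge
t_tissue, t_iodine = 10.0, 0.1          # path lengths (cm)

def intensity(mu_iodine, i0=1.0):
    """Transmitted intensity through tissue plus an iodine layer."""
    return i0 * math.exp(-mu_tissue * t_tissue - mu_iodine * t_iodine)

# The tissue term cancels in the subtraction, leaving
# signal = (mu_iodine_hi - mu_iodine_lo) * t_iodine.
signal = math.log(intensity(mu_iodine_lo)) - math.log(intensity(mu_iodine_hi))
print(round(signal, 3))  # → 0.4
```

Widening the energy windows, as the study varies, pulls in energies where the background attenuation is no longer flat, which is why contrast and noise both change with the subtraction energy width.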
NASA Astrophysics Data System (ADS)
Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.
2013-03-01
Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte Carlo (PIMC) method for the calculation of thermodynamic properties and—closely related to the latter—a Wigner dynamics method for the calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure for sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variables. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential for realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike qq¯ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and also with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/4π≲η/S<2.5/4π, in the temperature range from 170 to 440 MeV.
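The Haar-measure sampling step mentioned above can be illustrated with the standard QR-based recipe for drawing Haar-distributed SU(3) matrices (a generic sketch, not the authors' sampler; the helper name is ours):

```python
import numpy as np

def haar_su3(rng):
    """Draw an approximately Haar-distributed SU(3) matrix:
    QR-decompose a complex Ginibre matrix, fix the phases of R's
    diagonal so Q is Haar-distributed on U(3), then rescale by a
    cube root of det(Q) to land in SU(3)."""
    z = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    q = q * (d / np.abs(d))          # scale column j by the phase of r_jj
    return q / np.linalg.det(q) ** (1 / 3)
```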
Time-dependent photon heat transport through a mesoscopic Josephson device
NASA Astrophysics Data System (ADS)
Lu, Wen-Ting; Zhao, Hong-Kang
2017-02-01
The time-oscillating photon heat current through a dc-voltage-biased mesoscopic Josephson junction (MJJ) has been investigated by employing the nonequilibrium Green's function approach. A Landauer-like formula for the photon heat current has been derived in both its Fourier-space and time-oscillating versions, in which the Coulomb interaction, self-inductance, and magnetic flux play effective roles. Nonlinear behaviors are exhibited in the photon heat current due to the quantum nature of the MJJ and the applied external dc voltage. The magnitude of the heat current decreases with increasing external bias voltage, and subtle oscillation structures appear as the superposition of different photon heat branches. The overall period of the heat current with respect to time is not affected by the Coulomb interaction; however, its magnitude and phase vary considerably as the Coulomb interaction is changed.
NASA Astrophysics Data System (ADS)
Qin, Xiao-Ke
2016-12-01
We present a model in which a two-level system (TLS) interacts nonlocally with a one-dimensional coupled-resonator array (CRA). The coherent transport of a single photon inside the CRA is well controlled by the state of the TLS, which functions as a quantum switch. Spin up and spin down correspond to switch-on and switch-off, respectively, or vice versa; this behavior originates from the constructive and destructive interference of the two coupling paths. We improve the fidelity of the quantum switch by pre-adjusting the frequency of the resonators that couple to the TLS. The quantum switch realizes a quantum beam splitter when the TLS is in a superposition state. The single-photon wave packet then becomes entangled with the qubit and propagates to the remote resonators.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
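A minimal sketch of the two variance-reduction ingredients named above, particle splitting and Russian roulette, assuming a bare-bones weight-per-particle representation (function names are ours, not PENELOPE's). Both operations preserve the expected total statistical weight, which is what keeps the combined estimator unbiased:

```python
import random

def split(particles, factor):
    """Particle splitting: replace each particle by `factor` copies,
    each carrying 1/factor of the original statistical weight."""
    out = []
    for w in particles:
        out.extend([w / factor] * factor)
    return out

def russian_roulette(particles, threshold, survive_p=0.5):
    """Kill low-weight particles with probability 1 - survive_p;
    survivors get their weight boosted by 1/survive_p, so the
    expected total weight is unchanged."""
    out = []
    for w in particles:
        if w >= threshold:
            out.append(w)
        elif random.random() < survive_p:
            out.append(w / survive_p)
    return out
```

Splitting is applied where particles are heading toward the region of interest (more samples, lower weight each); roulette prunes the low-weight particles elsewhere so computing time is not wasted on histories that contribute little.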
MCNP/X Transport in the Tabular Regime
NASA Astrophysics Data System (ADS)
Hughes, H. Grady
2007-03-01
We review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, we emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. We also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
MCNP/X TRANSPORT IN THE TABULAR REGIME
HUGHES, H. GRADY
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
An Electron/Photon/Relaxation Data Library for MCNP6
Hughes, III, H. Grady
2015-08-07
The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.
NASA Astrophysics Data System (ADS)
Zandbergen, Sander R.; de Dood, Michiel J. A.
2010-01-01
Photonic graphene is a two-dimensional photonic crystal structure that is analogous to graphene. We use 5 mm diameter Al2O3 rods placed on a triangular lattice with a lattice constant a=8mm to create an isolated conical singularity in the photonic band structure at a microwave frequency of 17.6 GHz. At this frequency, the measured transmission of microwaves through a perfectly ordered structure enters a pseudodiffusive regime where the transmission scales inversely with the thickness L of the crystal (L/a≳5). The transmission depends critically on the configuration of the edges: distinct oscillations with an amplitude comparable to the transmission are observed for structures terminated with zigzag edges, while these oscillations are absent for samples with a straight edge configuration.
NASA Astrophysics Data System (ADS)
Lee, Young-Jin; Park, Su-Jin; Lee, Seung-Wan; Kim, Dae-Hong; Kim, Ye-Seul; Kim, Hee-Joung
2013-05-01
The photon counting detector based on cadmium telluride (CdTe) or cadmium zinc telluride (CZT) is a promising imaging modality that provides many benefits compared to conventional scintillation detectors. By using a pinhole collimator with the photon counting detector, we were able to improve both the spatial resolution and the sensitivity. The purpose of this study was to evaluate the photon counting and conventional scintillation detectors in a pinhole single-photon emission computed tomography (SPECT) system. We designed five pinhole SPECT systems of two types: one type with a CdTe photon counting detector and the other with a conventional NaI(Tl) scintillation detector. We conducted simulation studies and evaluated imaging performance. The results demonstrated that the spatial resolution of the CdTe photon counting detector was 0.38 mm, with a sensitivity 1.40 times greater than that of a conventional NaI(Tl) scintillation detector for the same detector thickness. Also, the average scatter fractions of the CdTe photon counting and the conventional NaI(Tl) scintillation detectors were 1.93% and 2.44%, respectively. In conclusion, we successfully evaluated various pinhole SPECT systems for small animal imaging.
Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large core fiber exhibits an up-to-date lowest attenuation (among all kagome fibers) of 40 dB/km over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, and 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air.
NASA Astrophysics Data System (ADS)
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use of dose calculation. The compromise between these two opposing factors gave way to the creation of several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware platform existed before 2007, and certain scientific computations were already carried out on the GPU. The year 2007, however, marked the arrival of the CUDA programming language, which makes it possible to disregard graphics contexts when programming the GPU. The GPU is a massively parallel computing platform and is suited to data-parallel algorithms. This thesis investigates how to maximize the use of a GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out entirely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a 2%-2 mm gamma criterion while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists of evaluating the platform for brachytherapy calculation. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43
NASA Astrophysics Data System (ADS)
Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.
2015-11-01
Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
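The Monte Carlo randomization test described above can be sketched as follows (a simplified version using globally random rather than planarity-preserving rewiring, with hypothetical helper names): compute the observed clustering, generate many randomized graphs with the same node and edge counts, and report the fraction that match or exceed the observation.

```python
import random
from itertools import combinations

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set(neighbors)}."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def random_graph(nodes, n_edges, rng):
    """Globally random null model with the same node and edge count."""
    adj = {v: set() for v in nodes}
    for a, b in rng.sample(list(combinations(nodes, 2)), n_edges):
        adj[a].add(b)
        adj[b].add(a)
    return adj

def small_world_p_value(adj, n_rand=200, seed=0):
    """One-sided Monte Carlo p-value: fraction of randomized graphs
    whose clustering is >= the observed clustering."""
    rng = random.Random(seed)
    nodes = list(adj)
    n_edges = sum(len(s) for s in adj.values()) // 2
    observed = clustering(adj)
    hits = sum(clustering(random_graph(nodes, n_edges, rng)) >= observed
               for _ in range(n_rand))
    return (hits + 1) / (n_rand + 1)
```

The paper's point is that the null model matters: replacing `random_graph` with a planarity-preserving randomization can absorb much of the apparent small-world signal.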
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed. According to these factors, various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy, the code has to transport several million particles, which takes a few hours; therefore, the Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
Asymmetric transport of light in arrow-shape photonic crystal waveguides
NASA Astrophysics Data System (ADS)
Rahal, H.; AbdelMalek, F.
2017-03-01
In this paper, we report a design for asymmetric light propagation based on a photonic crystal (PC) structure. The proposed PC is constructed from an arrow-shaped structure integrating different rows of air holes, which offers more than 65% transmission in one direction and less than 1% in the opposite direction. The proposed PC is based on the use of two parallel PC waveguides with different air holes in a single platform. The design, optimization, and performance analysis of the PC waveguide devices are carried out by employing accurate in-house two-dimensional finite-difference time-domain (2D FDTD) computational techniques. Our preliminary numerical simulation results show that complete asymmetric transmission can be achieved in the proposed single structure, which would contribute significantly to the realization of high-volume nanoscale photonic integrated circuitry.
The role of plasma evolution and photon transport in optimizing future advanced lithography sources
Sizyuk, Tatyana; Hassanein, Ahmed
2013-08-28
Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, which has many advantages, including the generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination of and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20–30 μm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with a conversion efficiency around 3% and with a collected EUV power of 190 W or more, which can satisfy current requirements for high-volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail the plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect the conditions for optimum EUV photon generation and collection. The importance of accurately modeling these physical processes increases as the target size and its simulation domain decrease.
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Comparisons differ by approximately 2% between experimental measurements and simulations. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
Fang, Yuan; Karim, Karim S.; Badano, Aldo
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) algorithm and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation
Kuo, H; Tome, W; Yaparpalvi, R; Garg, M; Bodner, W; Kalnicki, S
2015-06-15
Purpose: To validate a determinant-based photon transport solver for the dose imparted within transition zones between different media. Methods: Slabs of various thicknesses (0.2 cm, 0.5 cm, 1 cm, 3 cm) and materials (air, density = 0.0012 g/cm3; cork, 0.19 g/cm3; lung, 0.26 g/cm3; bone, 1.85 g/cm3; aluminum (Al), 2.7 g/cm3; titanium (Ti), 4.42 g/cm3; iron (Fe), 8 g/cm3) were sandwiched between 10 cm of solid water. A 6 MV beam was used to study the calculation difference between a superposition photon beam model (AAA) and the determinant-based Boltzmann photon transport solver (XB) at the upstream (I) and downstream (II) borders of the medium, within the medium (III), and far downstream of the medium (IV). The calculation was validated with the available thicknesses of air, cork, lung, Al, Ti, and Fe. Results are presented as the ratio of the dose at a point with the medium perturbation to the dose at the same point without it. Results: Zone I showed different backscatter enhancement from the high-density materials within 5 mm of the upstream border. AAA showed no backscatter at all; XB showed good agreement beyond 1 mm upstream (1.18 vs 1.14, 1.09 vs 1.10, and 1.04 vs 1.05 for Fe, Ti, and Fe, respectively). Zone II showed a re-buildup after exiting a high-density medium or air, but no buildup for densities close to water, in both the measurements and XB; AAA yielded the opposite results in Zone II. In Zone III, XB and AAA showed very different absorption in the high-density media and in air. XB and the measurements showed high concordance for photon attenuation in Zone IV, whereas AAA agreed less well, especially when the medium was air or Fe. Conclusion: XB compared well with measurement in regions more than 1 mm from the interface. Planning with XB should be beneficial for external beam planning in situations involving large air cavities, very low lung density, compact bone, or any kind of metal implant.
Fast Monte Carlo for radiation therapy: the PEREGRINE Project
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
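The quoted voxel standard deviations are the usual Monte Carlo batch statistics; a minimal sketch of how such a per-voxel relative uncertainty is estimated from per-batch scores (generic, not PEREGRINE's actual tally code):

```python
import math

def voxel_uncertainty(scores):
    """Relative standard error of the mean dose in one voxel,
    estimated from per-batch (or per-history) dose scores."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)                              # std error of mean
    return sem / mean if mean else float("inf")
```

Because the standard error falls as 1/sqrt(N), halving the reported <2% uncertainty would require roughly four times as many histories.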
Matsuda, Nobuyuki; Kuramochi, Eiichi; Takesue, Hiroki; Notomi, Masaya
2014-04-15
We investigate the dispersion and transmission properties of slow-light coupled-resonator optical waveguides that consist of more than 100 ultrahigh-Q photonic crystal cavities. We show that experimental group-delay spectra exhibited good agreement with numerically calculated dispersions obtained with the three-dimensional plane wave expansion method. Furthermore, a statistical analysis of the transmission property indicated that fabrication fluctuations in individual cavities are less relevant than in the localized regime. These behaviors are observed for a chain of up to 400 cavities in a bandwidth of 0.44 THz.
High-speed DC transport of emergent monopoles in spinor photonic fluids.
Terças, H; Solnyshkov, D D; Malpuech, G
2014-07-18
We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles.
Scott, Alison J D; Nahum, Alan E; Fenwick, John D
2009-07-01
The accuracy with which Monte Carlo models of photon beams generated by linear accelerators (linacs) can describe small-field dose distributions depends on the modeled width of the electron beam profile incident on the linac target. It is known that the electron focal spot width affects penumbra and cross-field profiles; here, the authors explore the extent to which source occlusion reduces linac output for smaller fields and larger spot sizes. A BEAMnrc Monte Carlo linac model has been used to investigate the variation in penumbra widths and small-field output factors with electron spot size. A formalism is developed separating head scatter factors into source occlusion and flattening filter factors. Differences between head scatter factors defined in terms of in-air energy fluence, collision kerma, and terma are explored using Monte Carlo calculations. Estimates of changes in kerma-based source occlusion and flattening filter factors with field size and focal spot width are obtained by calculating doses deposited in a narrow 2 mm wide virtual "miniphantom" geometry. The impact of focal spot size on phantom scatter is also explored. Modeled electron spot sizes of 0.4-0.7 mm FWHM generate acceptable matches to measured penumbra widths. However, the 0.5 cm field output factor is quite sensitive to electron spot width, the measured output only being matched by calculations for a 0.7 mm spot width. Because the spectra of the unscattered primary (psi(pi)) and head-scattered (psi(sigma)) photon energy fluences differ, miniphantom-based collision kerma measurements do not scale precisely with the total in-air energy fluence psi = psi(pi) + psi(sigma), but with psi(pi) + 1.2 psi(sigma). For most field sizes, on-axis collision kerma is independent of the focal spot size; but for a 0.5 cm field size and 1.0 mm spot width, it is reduced by around 7%, mostly due to source occlusion. The phantom scatter factor of the 0.5 cm field also shows some spot size dependence, decreasing by
Burns, T.J.
1994-03-01
An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.
NASA Astrophysics Data System (ADS)
Chern, Shyh-Shi; Cárdenas, Alfredo E.; Coalson, Rob D.
2001-10-01
Three-dimensional dynamic Monte Carlo simulations of polymer translocation through a cylindrical hole in a planar slab under the influence of an external driving force are performed. The driving force is intended to emulate the effect of a static electric field applied in an electrolytic solution containing charged monomer particles, as is relevant to the translocation of certain biopolymers through protein channel pores embedded in cell membranes. The time evolution of the probability distribution of the translocation coordinate (the number of monomers that have passed through the pore) is extracted from three-dimensional (3-D) simulations over a range of polymer chain lengths. These distributions are compared to the predictions of a 1-D Smoluchowski equation model of the translocation coordinate dynamics. Good agreement is found, with the effective diffusion constant for the 1-D Smoluchowski model being nearly independent of chain length.
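The 1-D description of the translocation coordinate can be illustrated with a simple biased random walk, the discrete analog of the drift term in a Smoluchowski model (parameter values are illustrative, not taken from the paper):

```python
import random

def translocate(n_monomers, drift=0.6, seed=None):
    """Biased 1-D random walk for the translocation coordinate m (number
    of monomers through the pore): step +1 with probability `drift`,
    else -1; reflecting at m=0, absorbing at m=n_monomers.
    Returns the number of steps to full translocation."""
    rng = random.Random(seed)
    m, steps = 0, 0
    while m < n_monomers:
        m += 1 if rng.random() < drift else -1
        m = max(m, 0)  # the chain cannot back fully out of the pore
        steps += 1
    return steps

def mean_translocation_time(n_monomers, trials=2000, drift=0.6, seed=0):
    """Monte Carlo estimate of the mean first-passage (translocation) time."""
    rng = random.Random(seed)
    return sum(translocate(n_monomers, drift, rng.random())
               for _ in range(trials)) / trials
```

For a step bias p the drift velocity is 2p - 1, so the mean translocation time grows roughly as N/(2p - 1), which is the kind of chain-length scaling the 1-D Smoluchowski model is used to check.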
Jha, Abhinav K.; Kupinski, Matthew A.; Masumura, Takahiro; Clarkson, Eric; Maslov, Alexey V.; Barrett, Harrison H.
2014-01-01
We present the implementation, validation, and performance of a Neumann-series approach for simulating light propagation at optical wavelengths in uniform media using the radiative transport equation (RTE). The RTE is solved for an anisotropic-scattering medium in a spherical harmonic basis for a diffuse-optical-imaging setup. The main objectives of this paper are threefold: to present the theory behind the Neumann-series form for the RTE, to design and develop the mathematical methods and the software to implement the Neumann series for a diffuse-optical-imaging setup, and, finally, to perform an exhaustive study of the accuracy, practical limitations, and computational efficiency of the Neumann-series method. Through our results, we demonstrate that the Neumann-series approach can be used to model light propagation in uniform media with small geometries at optical wavelengths. PMID:23201893
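The Neumann-series idea above, expanding the solution of a linear transport equation in successive scattering orders, can be sketched on a discretized operator (a generic illustration, not the paper's spherical-harmonic implementation):

```python
import numpy as np

def neumann_solve(K, b, n_terms=50):
    """Approximate the solution of x = b + K x by the truncated Neumann
    series x ~= sum_{n=0}^{N} K^n b. Each term adds one more order of
    scattering in the RTE analogy; the series converges when the
    spectral radius of K is below 1 (sub-critical scattering)."""
    x = np.zeros_like(b)
    term = b.copy()          # K^0 b: unscattered (ballistic) component
    for _ in range(n_terms):
        x = x + term
        term = K @ term      # next scattering order
    return x
```

This is why the paper stresses practical limits: for strongly scattering media the spectral radius of the scattering operator approaches 1 and many terms are needed before truncation error becomes acceptable.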
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a promising developing treatment modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and
Transport properties of disordered photonic crystals around a Dirac-like point.
Wang, Xiao; Jiang, Haitao; Li, Yuan; Yan, Chao; Deng, Fusheng; Sun, Yong; Li, Yunhui; Shi, Yunlong; Chen, Hong
2015-02-23
At the Dirac-like point at the Brillouin zone center, photonic crystals (PhCs) can mimic a zero-index medium. In the band structure, an additional flat band of longitudinal modes intersects the Dirac cone. This longitudinal mode can be excited in finite-size PhCs at the Dirac-like point. By introducing positional shifts into the PhCs, we study the dependence of the longitudinal mode on disorder. At the Dirac-like point, the transmission peak induced by the longitudinal mode decreases as the degree of randomness increases. However, at a frequency slightly above the Dirac-like point, where the longitudinal mode is absent, the transmission is insensitive to disorder because the effective index is still near zero and the effective wavelength in the PhC is very large.
NASA Astrophysics Data System (ADS)
Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.
2012-11-01
Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator produces a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm2. Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. Monte Carlo simulation is required for optimizing the
Energy deposition model for I-125 photon radiation in water
NASA Astrophysics Data System (ADS)
Fuss, M. C.; Muñoz, A.; Oller, J. C.; Blanco, F.; Limão-Vieira, P.; Williart, A.; Huerga, C.; Téllez, M.; García, G.
2010-10-01
In this study, an electron-tracking Monte Carlo algorithm developed by us is combined with established photon transport models in order to simulate all primary and secondary particle interactions in water for incident photon radiation. As input parameters for secondary electron interactions, electron scattering cross sections by water molecules and experimental energy loss spectra are used. With this simulation, the resulting energy deposition can be modelled at the molecular level, yielding detailed information about localization and type of single collision events. The experimental emission spectrum of I-125 seeds, as used for radiotherapy of different tumours, was used for studying the energy deposition in water when irradiating with this radionuclide.
Vasdekis, Andreas E.; Scott, E. A.; Roke, Sylvie; Hubbell, J. A.; Psaltis, D.
2013-04-03
Thin membranes, under appropriate boundary conditions, can self-assemble into vesicles, nanoscale bubbles that encapsulate and hence protect or transport molecular payloads. In this paper, we review the types and applications of light fields interacting with vesicles. By encapsulating light-emitting molecules (e.g. dyes, fluorescent proteins, or quantum dots), vesicles can act as particles and imaging agents. Vesicle imaging can also be performed using second-harmonic generation from the vesicle membrane, as well as mass spectrometry. Light fields can also be employed to transport vesicles using optical tweezers (photon momentum) or to directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy).
Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code
Rhoades, W.A.; Childs, R.L.
1982-07-01
DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.
Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.
2014-01-20
To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on the spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
MCNP™ Monte Carlo: A précis of MCNP
Adams, K.J.
1996-06-01
MCNP™ is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.
NASA Astrophysics Data System (ADS)
Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.
2012-03-01
Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
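The unbiasedness claim in this abstract rests on the standard importance-sampling identity: sample from a biased density, then multiply by the likelihood ratio. The sketch below shows that mechanism on a toy rare-event integral, not the OCT photon-path geometry; all names and the chosen densities are illustrative.

```python
import math
import random

def importance_estimate(f, sampler_q, weight, n, seed=0):
    """Estimate E_p[f(X)] by drawing from a biased density q and
    multiplying each sample by the likelihood ratio w(x) = p(x)/q(x),
    which keeps the estimator unbiased. This is the core mechanism
    behind biasing photon paths toward detected trajectories."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sampler_q(rng)
        total += f(x) * weight(x)
    return total / n

# Toy rare-event example: P(X > 3) for X ~ Exp(1).
# The biased sampler Exp(1/4) visits the tail far more often.
p = lambda x: math.exp(-x)
q = lambda x: 0.25 * math.exp(-x / 4.0)
est = importance_estimate(
    f=lambda x: 1.0 if x > 3.0 else 0.0,
    sampler_q=lambda rng: rng.expovariate(0.25),
    weight=lambda x: p(x) / q(x),
    n=20000,
)
# exact answer is exp(-3) ≈ 0.0498
```

The same ratio-weighting applied per scattering event is what lets a biased photon walk report an unweighted-equivalent detector signal without the systematic bias the abstract mentions.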
NASA Astrophysics Data System (ADS)
Lucci, Luca; Palestri, Pierpaolo; Esseni, David; Selmi, Luca
2005-09-01
In this paper, we present simulations of some of the most relevant transport properties of the inversion layer of ultra-thin film SOI devices with a self-consistent Monte-Carlo transport code for a confined electron gas. We show that size-induced quantization not only decreases the low-field mobility (as experimentally found in [Uchida K, Koga J, Ohba R, Numata T, Takagi S. Experimental evidences of quantum-mechanical effects on low-field mobility, gate-channel capacitance and threshold voltage of ultrathin body SOI MOSFETs, IEEE IEDM Tech Dig 2001;633-6; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. Low field electron and hole mobility of SOI transistors fabricated on ultra-thin silicon films for deep sub-micron technology application. IEEE Trans Electron Dev 2001;48(12):2842-50; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. An experimental study of mobility enhancement in ultra-thin SOI transistors operated in double-gate mode, IEEE Trans Electron Dev 2003;50(3):802-8. [1-3]
Ryu, Hoon
2016-12-01
Dominance of various scattering mechanisms in determination of the carrier mobility is examined for silicon (Si) nanowires of sub-10-nm cross-sections. With a focus on p-type channels, the steady-state hole mobility is studied with multi-subband Monte Carlo simulations to consider quantum effects in nanoscale channels. Electronic structures of gate-all-around nanowires are described with a 6-band k · p model. Channel bandstructures and electrostatics under gate biases are determined self-consistently with Schrödinger-Poisson simulations. Modeling results not only indicate that the hole mobility is severely degraded as channels have smaller cross-sections and are inverted more strongly but also confirm that the surface roughness scattering degrades the mobility more severely than the phonon scattering does. The surface roughness scattering affects carrier transport more strongly in narrower channels, showing ∼90 % dominance in determination of the mobility. At the same channel population, [110] channels suffer from the surface roughness scattering more severely than [100] channels do, due to the stronger corner effect and larger population of carriers residing near channel surfaces. With a sound theoretical framework coupled to the spatial distribution of channel carriers, this work may present a useful guideline for understanding hole transport in ultra-narrow Si nanowires.
Two-photon transport in a waveguide coupled to a cavity in a two-level system
Shi, T.; Sun, C. P.; Fan, Shanhui
2011-12-15
We study two-photon effects for a cavity quantum electrodynamics system in which a waveguide is coupled to a cavity containing a two-level system. The wave function of two-photon scattering is exactly solved by using the Lehmann-Symanzik-Zimmermann reduction. Our results for the quantum statistical properties of the outgoing photons explicitly exhibit photon blockade effects in the strong-coupling regime. These results agree with the observations of recent experiments.
Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
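The voxel/dosel overlap step in this patent abstract reduces, in one dimension, to interval intersection followed by a density-weighted sum. The sketch below is a 1-D illustration of that bookkeeping only (the patent's grids are volumetric); the geometry and density values are invented for the example.

```python
def overlap(a, b):
    """Length of the intersection of two 1-D intervals (lo, hi).
    A 1-D stand-in for the voxel/dosel common-volume calculation."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def dosel_masses(voxels, densities, dosels):
    """Mass inside each dosel: sum over voxels of
    (common volume) x (voxel mass density), i.e. the incremental
    dosel masses of the abstract, accumulated per dosel."""
    return [sum(overlap(d, v) * rho for v, rho in zip(voxels, densities))
            for d in dosels]

voxels    = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]   # cm, hypothetical grid
densities = [1.0, 0.3, 1.0]                        # g/cm^3 (water, lung, water)
dosels    = [(0.5, 1.5), (1.5, 2.5)]               # dosels straddle voxel edges
masses = dosel_masses(voxels, densities, dosels)
# masses[0] = 0.5*1.0 + 0.5*0.3 = 0.65 g; masses[1] = 0.5*0.3 + 0.5*1.0 = 0.65 g
```

Dose per dosel then follows as deposited energy divided by these masses, which is the final step the abstract describes.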
Independent pixel and Monte Carlo estimates of stratocumulus albedo
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN
1994-01-01
Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
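The plane-parallel albedo bias discussed above is, at heart, a Jensen-inequality effect: albedo is a concave function of optical depth, so the albedo of the mean liquid water exceeds the mean of the albedos. A minimal sketch, using a toy concave albedo function and invented optical depths rather than the paper's cloud model:

```python
def albedo(tau, c=7.0):
    """Toy plane-parallel albedo: concave and increasing in optical
    depth tau (the constant c is an illustrative shape parameter)."""
    return tau / (tau + c)

def plane_parallel_bias(taus):
    """Bias = albedo(mean tau) - mean(albedo(tau)). Positive because
    albedo is concave in tau: assuming uniformly distributed liquid
    water overestimates the area-average albedo."""
    mean_tau = sum(taus) / len(taus)
    ipa = sum(albedo(t) for t in taus) / len(taus)  # independent pixel estimate
    return albedo(mean_tau) - ipa

taus = [2.0, 5.0, 10.0, 30.0]  # variable optical depth along cloud streets
bias = plane_parallel_bias(taus)  # > 0
```

The independent pixel approximation keeps the variability (the `ipa` term) but drops horizontal photon transport, which is why its residual error against full Monte Carlo is an order of magnitude smaller than the plane-parallel bias itself.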
Parallel Finite Element Electron-Photon Transport Analysis on 2-D Unstructured Mesh
Drumm, C.R.
1999-01-01
A computer code has been developed to solve the linear Boltzmann transport equation on an unstructured mesh of triangles, generated from a Pro/E model. An arbitrary arrangement of distinct material regions is allowed. Energy dependence is handled by solving over an arbitrary number of discrete energy groups. Angular dependence is treated by Legendre-polynomial expansion of the particle cross sections and a discrete-ordinates treatment of the particle fluence. The resulting linear system is solved in parallel with a preconditioned conjugate-gradient method. The solution method is unique in that the space-angle dependence is solved simultaneously, eliminating the need for the usual inner iterations. Electron cross sections are obtained from a Goudsmit-Saunderson modified version of the CEPXS code. A one-dimensional version of the code has also been developed for testing and development purposes.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio
2010-04-05
FERMI@Elettra comprises two free-electron lasers (FELs) that will generate short pulses (τ ≈ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high-quality, longitudinally coherent photon pulses. This capability, together with the possibility of temporal synchronization to external lasers and control of the output photon polarization, will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.
Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using a kilovoltage beam. PMID:26170553
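For readers unfamiliar with the HVL figure quoted above: for a monoenergetic (un-hardened) beam it follows directly from exponential attenuation. The sketch below shows that relation; the attenuation coefficient used is an illustrative value chosen to reproduce the reported first HVL, not a measured quantity from the paper (real kilovoltage spectra harden with depth, which is why first and second HVLs differ).

```python
import math

def hvl_from_mu(mu):
    """Half-value layer of a monoenergetic beam: the absorber
    thickness that halves the intensity, from I = I0 * exp(-mu * x):
    HVL = ln(2) / mu."""
    return math.log(2.0) / mu

# Illustrative only: mu ≈ 0.182 per mm of Al gives HVL ≈ 3.8 mm,
# matching the first HVL reported in the abstract.
hvl = hvl_from_mu(0.182)
```

For a polyenergetic beam the second HVL exceeds the first because low-energy photons are filtered out first, exactly the 3.8 mm versus 10.3 mm pattern reported.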
NASA Astrophysics Data System (ADS)
Benhenni, Malika; Stachoň, Martin; Gadéa, Florent Xavier; Yousfi, Mohammed; Kalus, René
2016-09-01
A hybrid dynamical method based on the classical treatment of nuclei and the quantum treatment of electrons was used to calculate momentum transfer and dissociation cross-sections for collisions of neon dimer cations with neon atoms. For the inclusion of nuclear quantum effects, a semi-empirical factor was introduced to correct the hybrid momentum transfer cross-sections at low collision energies. Both uncorrected and quantum corrected hybrid cross-sections were used to calculate the Ne₂⁺ mobility, and longitudinal and transverse characteristic diffusion energies over a wide range of the reduced electric field. Furthermore, the Ne₂⁺ dissociation rate constant was calculated and compared to measured data. In addition, an approximate inverse method based on an effective isotropic interaction potential was also used to calculate the momentum transfer cross-sections and related transport data.
Lin, Shih-Hsien; Chen, Kao Chin; Lee, Sheng-Yu; Chiu, Nan Tsing; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Chen, Chia-Chieh; Liao, Mei-Hsiu; Yang, Yen Kuang
2015-03-30
One of the consequences of heroin dependency is a huge expenditure on drugs. This underlying economic expense may be a grave burden for heroin users and may lead to criminal behavior, which is a huge cost to society. The neuropsychological mechanism related to heroin purchase remains unclear. Based on recent findings and the established dopamine hypothesis of addiction, we speculated that expenditure on heroin and central dopamine activity may be associated. A total of 21 heroin users were enrolled in this study. The annual expenditure on heroin was assessed, and the availability of the dopamine transporter (DAT) was assessed by single-photon emission computed tomography (SPECT) using [(99m)Tc]TRODAT-1. Parametric and nonparametric correlation analyses indicated that annual expenditure on heroin was significantly and negatively correlated with the availability of striatal DAT. After adjustment for potential confounders, the predictive power of DAT availability was significant. Striatal dopamine function may be associated with opioid purchasing behavior among heroin users, and the cycle of spiraling dysfunction in the dopamine reward system could play a role in this association.
NASA Astrophysics Data System (ADS)
Lin, Lin; Zhang, Mei
2015-02-01
The scaling Monte Carlo method and a Gaussian model are applied to simulate the transport of light beams with arbitrary waist radius. Much of the time, Monte Carlo simulation is performed for pencil or cone beams, where the initial state of each photon is identical. In practical applications, incident light is usually focused on the sample, forming an approximately Gaussian distribution on the surface. As the focal position within the sample changes, the initial states of the photons are no longer identical. Using the hyperboloid method, the initial angle and coordinates of each photon are generated statistically according to the Gaussian waist size and focal depth. Scaling calculations are performed with baseline data from a standard Monte Carlo simulation. The scaling method incorporated with the Gaussian model was tested, and proved effective over a range of scattering coefficients from 20% to 180% of the value used in the baseline simulation. In most cases, the percentage error was less than 10%. Increasing the focal depth results in larger errors in the scaled radial reflectance in the region close to the optical axis. In addition to evaluating the accuracy of the scaling Monte Carlo method, this study has implications for inverse Monte Carlo with arbitrary optical-system parameters.
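The scaling step referred to above relies on a similarity relation: a stored baseline photon history can be re-used at a different scattering coefficient by rescaling its geometric coordinates and re-weighting it per scattering event. The sketch below shows that relation for a single exit record; the function name, matched-albedo assumption, and all numbers are illustrative, not the paper's implementation.

```python
def scale_exit_record(radius_base, nscat, mus_base, mus_new,
                      albedo_base=0.9, albedo_new=0.9):
    """Rescale one baseline Monte Carlo exit record to a new scattering
    coefficient. Path lengths (and hence the exit radius) shrink by
    mus_base/mus_new, and the photon weight is multiplied by the
    per-collision albedo ratio raised to the number of scattering
    events nscat."""
    radius_new = radius_base * (mus_base / mus_new)
    weight_ratio = (albedo_new / albedo_base) ** nscat
    return radius_new, weight_ratio

# Doubling the scattering coefficient halves the geometric scale;
# with matched albedo the statistical weight is unchanged.
r, w = scale_exit_record(radius_base=1.0, nscat=5,
                         mus_base=10.0, mus_new=20.0)
```

Because a focused Gaussian beam fixes an absolute length scale (the waist and focal depth), this rescaling is only approximate there, which is consistent with the larger near-axis errors the abstract reports at greater focal depths.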
The Monte Carlo code MCSHAPE: Main features and recent developments
NASA Astrophysics Data System (ADS)
Scot, Viviana; Fernandez, Jorge E.
2015-06-01
MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons, with the special feature of describing the full evolution of the photon polarization state through its interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV, Compton and Rayleigh scattering and the photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.
Design of a multipurpose mirror system for LCLS-2 photon transport studies (Conference Presentation)
NASA Astrophysics Data System (ADS)
Morton, Daniel S.; Cocco, Daniele; Kelez, Nicholas M.; Srinivasan, Venkat N.; Stefan, Peter M.; Zhang, Lin
2016-09-01
LCLS-2 is a high-repetition-rate (up to 1 MHz) superconducting FEL whose soft x-ray branch will operate from 0.2 to 1.3 keV. Over this energy range, there is a large variation in beam divergence and therefore a large variation in the beam footprint on the optics. This poses a significant problem as it creates thermal gradients across the tangential axis of the mirror, which, in turn, create non-cylindrical deformations that cannot be corrected using a single-actuator mechanical bender. To minimize power loss and preserve the wave front, the optics require sub-nanometer RMS height errors and sub-microradian slope errors. One of the key components of the beam transport in the SXR beamline is the bendable focusing mirror system, operated in a Kirkpatrick-Baez configuration. For the first time in the synchrotron or FEL world, the large bending needed to focus the beam will be coupled with a cooling system on the same mirror assembly, since the majority of the FEL power is delivered through every optic leading up to the sample. To test such a concept, we have developed a mirror bender system to be used as a multipurpose optic. The system has been very accurately modeled in FEA. This, along with very good repeatability of the bending mechanism, makes it ideal for use as a metrology tool for calibrating instruments as well as for testing the novel cooling/bending concept. The bender design and the tests carried out on it will be presented.
Parsons, C; Parsons, D; Robar, J; Kelly, R
2014-06-15
Purpose: The introduction of the TrueBeam linac platform provides access to an in-air target assembly, making it possible to apply novel treatments using multiple target designs. One such novel treatment uses multiple low-Z targets to enhance surface dose, replacing the use of synthetic tissue-equivalent material (bolus). This treatment technique will decrease the common dosimetric and setup errors prevalent in using physical treatment accessories like bolus. The groundwork is presented herein for a novel treatment beam that enhances surface dose to within 80-100% of the dose at dmax by utilizing low-Z (carbon) targets of various percent-CSDA-range thicknesses, operated at 2.5-4 MeV and used in conjunction with a clinical 6 MV beam. Methods: A standard Monte Carlo model of a Varian Clinac accelerator was developed to the manufacturer's specifications. Simulations were performed using Be, C, and Al as potential low-Z targets placed in the secondary target position. The results determined C to be the target material of choice. Simulations of 15, 30 and 60% CSDA-range C beams were propagated through slab phantoms. The resulting PDDs were weighted and combined with a standard 6 MV treatment beam. Versions of the experimental targets were installed into a 2100C Clinac and the models were validated. Results: Carbon was shown to be the low-Z material of choice for this project. Using combinations of 15, 30, and 60% CSDA beams operated at 2.5 and 4 MeV in combination with a standard 6 MV treatment beam, the surface dose was shown to be enhanced to within 80-100% of the dose at dmax. Conclusion: The modeled low-Z beams were successfully validated using machined versions of the targets. Water phantom measurements and slab phantom simulations show excellent correlation. Patient simulations are now underway to compare the use of bolus with the proposed novel beams. NSERC.
NASA Astrophysics Data System (ADS)
Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho
2015-07-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning systems, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with each of the three algorithms for each patient. The plans were produced on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were lower than those calculated using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of absolute dosimetry. Differences were found when comparing the calculation algorithms: the PB algorithm estimated higher doses for the target than the CC and MC algorithms, effectively overestimating the dose relative to those calculations, while the MC algorithm showed better accuracy than the other algorithms.
Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.
2004-12-22
The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
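The estimation machinery described above is a Markov chain Monte Carlo scheme. As an illustration of the core idea only, and not of the authors' geostatistical model, a minimal random-walk Metropolis sampler might look like the following; the target log-posterior (a standard normal) and all tuning values are hypothetical:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Generic random-walk Metropolis sampler (illustrative sketch only).

    Proposes x' = x + N(0, step) and accepts with probability
    min(1, exp(log_post(x') - log_post(x))).
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # Accept/reject in log space to avoid overflow.
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy target: a standard normal posterior, log p(x) = -x^2/2 + const.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
```

With a real model, `log_post` would encode the likelihood of the GPR attenuation data plus the spatial prior on lithofacies; here it is merely a stand-in.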
NASA Technical Reports Server (NTRS)
Stephens, D. L. Jr; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.
2002-01-01
Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy-ion transport code developed by Zeitlin at LBNL is compared with the experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data with the intercept forced to zero. The fitted slope is 0.825 and the R² value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R², respectively.
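The agreement metric used above is an ordinary least-squares fit with the intercept forced to zero, for which the slope reduces to b = Σxy / Σx². A sketch of that computation follows, using hypothetical (model, experiment) pairs rather than the paper's fluence data; note that conventions for R² differ for zero-intercept fits:

```python
def fit_through_origin(x, y):
    """Least-squares fit y = b*x with the intercept forced to zero.

    The closed form is b = sum(x*y) / sum(x*x). R² is computed here
    against the mean of y; some zero-intercept conventions instead use
    the uncentered total sum of squares.
    """
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    ybar = sum(y) / len(y)
    ss_res = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return b, 1.0 - ss_res / ss_tot

# Hypothetical ordered pairs -- not the experiment's fluence values.
x = [1.0, 2.0, 3.0, 4.0]
y = [0.9, 1.7, 2.2, 3.1]
slope, r2 = fit_through_origin(x, y)
```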
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as portable, personal dosemeters in mixed neutron and gamma-ray fields, but their typically large physical size hinders this use. To formulate compact TEPC designs, a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated with each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values; for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.
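The frequency- and dose-mean lineal energies compared above have standard microdosimetric definitions: ȳ_F = ⟨y⟩ and ȳ_D = ⟨y²⟩/⟨y⟩. A sketch of how they could be computed from a list of single-event lineal-energy samples; the event values are hypothetical, not taken from either benchmark:

```python
def lineal_energy_means(y_events):
    """Frequency-mean and dose-mean lineal energy from single-event samples.

    Standard microdosimetric definitions:
        y_F = <y>          (frequency mean)
        y_D = <y^2> / <y>  (dose mean; weights events by their energy)
    """
    n = len(y_events)
    s1 = sum(y_events)
    s2 = sum(y * y for y in y_events)
    return s1 / n, s2 / s1

# Hypothetical lineal-energy events in keV/µm (illustrative values only).
events = [0.5, 1.2, 3.4, 0.8, 2.1, 5.0]
yF, yD = lineal_energy_means(events)
```

By the Cauchy-Schwarz inequality, ȳ_D ≥ ȳ_F always holds, which is a quick sanity check on any computed pair.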
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo
2017-03-01
Dual-energy contrast-enhanced digital mammography (CEDM) has been used to decompose breast images and improve diagnostic accuracy for tumor detection. However, this technique increases radiation dose and suffers from inaccuracy in material decomposition due to the limitations of conventional X-ray detectors. In this study, we simulated dual-energy CEDM with an energy-resolved photon-counting detector (ERPCD) to reduce radiation dose and improve the quantitative accuracy of material decomposition images. The ERPCD-based dual-energy CEDM was compared to the conventional dual-energy CEDM in terms of radiation dose and quantitative accuracy. The correlation between radiation dose and image quality was also evaluated to optimize the ERPCD-based dual-energy CEDM technique. The results showed that the material decomposition errors of the ERPCD-based dual-energy CEDM were 0.56-0.67 times those of the conventional dual-energy CEDM. The imaging performance of the proposed technique was optimized at a radiation dose of 1.09 mGy, which is half the mean glandular dose (MGD) of a single-view mammogram. It can be concluded that the ERPCD-based dual-energy CEDM with an optimal exposure level is able to improve the quality of material decomposition images as well as reduce radiation dose.
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multigroup SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multigroup SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
Carver, D; Kost, S; Pickens, D; Price, R; Stabin, M
2014-06-15
Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard CT 100-mm pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner for 100 and 120 kVp at 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared to both CTDI₁₀₀ values determined from the ion chamber and to CTDI₁₀₀ values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI₁₀₀ values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%), and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI₁₀₀ values agree to within 10% or less, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool with the significant benefit of also assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry such as verification of spatial dose distribution and beam width.
Flint, D B; O’Brien, D J; McFadden, C H; Wolfe, T; Krishnan, S; Sawakuchi, G O; Hallacy, T M
2015-06-15
Purpose: To determine the effect of gold-nanoparticles (AuNPs) on energy deposition in water for different irradiation conditions. Methods: TOPAS version B12 Monte Carlo code was used to simulate energy deposition in water from monoenergetic 40 keV and 85 keV photon beams and a 6 MV Varian Clinac photon beam (IAEA phase space file, 10×10 cm², SSD 100 cm). For the 40 and 85 keV beams, monoenergetic 2×2 mm² parallel beams were used to irradiate a 30×30×10 µm³ water mini-phantom located at 1.5 cm depth in a 30×30×50 cm³ water phantom. 5000 AuNPs of 50 nm diameter were randomly distributed inside the mini-phantom. Energy deposition was scored in the mini-phantom with the AuNPs' material set to gold and then water. For the 6 MV beam, we created another phase space (PHSP) file on the surface of a 2 mm diameter sphere located at 1.5 cm depth in the water phantom. The PHSP file consisted of all particles entering the sphere, including backscattered particles. Simulations were then performed using the new PHSP as the source with the mini-phantom centered in a 2 mm diameter water sphere in vacuum. The g4em-livermore reference list was used with "EMRangeMin/EMRangeMax = 100 eV/7 MeV" and "SetProductionCutLowerEdge = 990 eV" to create the new PHSP, and "SetProductionCutLowerEdge = 100 eV" for the mini-phantom simulations. All other parameters were set as defaults ("finalRange = 100 µm"). Results: The addition of AuNPs resulted in an increase in the mini-phantom energy deposition of (7.5 ± 8.7)%, (1.6 ± 8.2)%, and (−0.6 ± 1.1)% for the 40 keV, 85 keV and 6 MV beams, respectively. Conclusion: Enhanced energy deposition was seen at low photon energies, but decreased with increasing energy. No enhancement was observed for the 6 MV beam. Future work is required to decrease the statistical uncertainties in the simulations. This research is partially supported from institutional funds from the Center for Radiation Oncology Research, The
Application of advanced Monte Carlo Methods in numerical dosimetry.
Reichelt, U; Henniger, J; Lange, C
2006-01-01
Many tasks in different sectors of dosimetry are very complex and highly sensitive to changes in the radiation field. Often, only the simulation of radiation transport is capable of describing the radiation field completely. Down to sub-cellular dimensions, energy deposition by cascades of secondary electrons is the main pathway for damage induction in matter, and a large number of interactions take place before such electrons are slowed down to thermal energies. For some photon transport problems, a large number of photon histories likewise need to be processed. The efficient non-analogue Monte Carlo program AMOS has therefore been developed for photon and electron transport. Various applications and benchmarks are presented demonstrating its capabilities. For radiotherapy purposes, the radiation field of a brachytherapy source is calculated according to the American Association of Physicists in Medicine Task Group Report 43 (AAPM/TG43). As additional examples, results for the detector efficiency of a high-purity germanium (HPGe) detector and a dose estimation for an X-ray shielding for radiation protection are shown.
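Non-analogue Monte Carlo, of the kind the AMOS abstract refers to, replaces faithful event-by-event simulation with weighted (biased) sampling to cut variance for rare outcomes. A generic illustration, not AMOS itself, is path-length stretching for a deep-penetration transmission estimate: path lengths are drawn from a stretched exponential and each contribution carries the likelihood-ratio weight, which keeps the estimator unbiased:

```python
import math
import random

def transmission_analogue(sigma, T, n, rng):
    """Analogue estimate of the uncollided transmission exp(-sigma*T):
    count path lengths drawn from Exp(sigma) that exceed the slab depth T."""
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma) > T)
    return hits / n

def transmission_biased(sigma, sigma_b, T, n, rng):
    """Non-analogue estimate: sample path lengths from a stretched
    exponential Exp(sigma_b), sigma_b < sigma, and weight each surviving
    history by the likelihood ratio (sigma/sigma_b)*exp(-(sigma-sigma_b)*x)."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > T:
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

rng = random.Random(42)
exact = math.exp(-10.0)  # true transmission for sigma=1, T=10 (~4.5e-5)
# Analogue sampling expects fewer than one surviving history in 20000:
analogue = transmission_analogue(1.0, 10.0, 20000, rng)
# The biased estimator resolves the same quantity with the same budget:
biased = transmission_biased(1.0, 0.1, 10.0, 20000, rng)
```

The same weighting idea underlies implicit capture and other survival-biasing schemes common in production photon/electron codes.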
Cranmer-Sargison, G; Weston, S; Evans, J A; Sidhu, N P; Thwaites, D I
2012-08-21
The goal of this work was to examine the use of simplified diode detector models within a recently proposed Monte Carlo (MC) based small-field dosimetry formalism and to investigate the influence that electron source parameterization has on MC-calculated correction factors. BEAMnrc was used to model Varian 6 MV jaw-collimated square field sizes down to 0.5 cm. The IBA stereotactic field diode (SFD), PTW T60016 (shielded) and PTW T60017 (un-shielded) diodes were modelled in DOSRZnrc and isocentric output ratios (OR(fclin)(detMC)) calculated at depths of d = 1.5, 5.0 and 10.0 cm. Simplified detector models were then tested by evaluating the percent difference in OR(fclin)(detMC) between the simplified and complete detector models. The influence of active volume dimension on simulated output ratio and response factor was also investigated. The sensitivity of each MC-calculated replacement correction factor (k(fclin,fmsr)(Qclin,Qmsr)), as a function of electron FWHM between 0.100 and 0.150 cm and energy between 5.5 and 6.5 MeV, was investigated for the same set of small field sizes using the simplified detector models. The SFD diode can be approximated simply as a silicon chip in water, the T60016 shielded diode can be modelled as a chip in water plus the entire shielding geometry, and the T60017 unshielded diode as a chip in water plus the filter plate located upstream. The detector-specific k(fclin,fmsr)(Qclin,Qmsr) required to correct measured output ratios using the SFD, T60016 and T60017 diode detectors are insensitive to incident electron energy between 5.5 and 6.5 MeV and spot size variation between FWHM = 0.100 and 0.150 cm. Three general conclusions come out of this work: (1) detector models can be simplified to produce OR(fclin)(detMC) to within 1.0% of those calculated using the complete geometry, where typically not only the silicon chip but also any high-density components close to the chip, such as scattering plates or shielding material, is necessary
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
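Two of the items in that outline can be sketched in a few lines: the hit-or-miss estimate of π (points thrown uniformly into a unit square, counting those inside the quarter-circle) and inverse transform sampling, shown here for an exponential distribution whose CDF F(x) = 1 − exp(−λx) inverts to x = −ln(1 − u)/λ:

```python
import math
import random

rng = random.Random(0)
n = 100_000

# Hit-or-miss estimate of pi: the quarter-circle occupies pi/4 of the
# unit square, so 4 * (fraction of hits) converges to pi.
inside = sum(1 for _ in range(n)
             if rng.random() ** 2 + rng.random() ** 2 < 1.0)
pi_est = 4.0 * inside / n

# Inverse transform sampling of Exp(lam): draw u ~ U(0,1) and map it
# through the inverse CDF, x = -ln(1 - u) / lam.
lam = 2.0
samples = [-math.log(1.0 - rng.random()) / lam for _ in range(n)]
mean = sum(samples) / len(samples)  # converges to 1/lam = 0.5
```

The Law of Large Numbers guarantees both estimates converge, and the Central Limit Theorem gives the familiar 1/√n error scaling.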
Leyva, A.; Pinera, I.; Abreu, Y.; Cruz, C. M.; Montano, L. M.
2008-08-11
During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of photon energy deposition on depth, and its integral value over the whole active volume, were calculated. The results reveal that the designed device geometry can feasibly be optimized.
Monte-Carlo Estimation of the Inflight Performance of the GEMS Satellite X-Ray Polarimeter
NASA Technical Reports Server (NTRS)
Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith; Krizmanic, John; Sturner, Steve; Griffiths, Scott; Kaaret, Philip; Marlowe, Hannah
2014-01-01
We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.
Islam, M. Anwarul; Akramuzzaman, M. M.; Zakaria, G. A.
2012-01-01
Manufacturing of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources are flexible for smaller-diameter applicators and are also suitable for interstitial implants. Presently, miniaturized 60Co HDR sources have been made available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir. High dose rate brachytherapy sources with longer half-lives are, from an economic point of view, a pragmatic solution for developing countries. This study aims to compare the TG-43U1 dosimetric parameters for the new BEBIG 60Co HDR and new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using an EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and new BEBIG 60Co HDR sources. Air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10⁻⁸ ± 0.55% U Bq⁻¹ and 3.039×10⁻⁷ ± 0.41% U Bq⁻¹ for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water are 1.116 ± 0.12% cGy h⁻¹ U⁻¹ and 1.097 ± 0.12% cGy h⁻¹ U⁻¹, respectively. The values of the radial dose function for distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the other source. The anisotropy values increase sharply towards the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than for the other source. Tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation is noted at 5 cm radial distance in this regard when comparing the two sources, except for lung tissue. The true dose rates are calculated considering photon as well as electron transport using appropriate cut
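The TG-43 quantities reported above (air-kerma strength S_K, dose-rate constant Λ, radial dose function g(r)) combine in the TG-43U1 1-D point-source formalism as Ḋ(r) = S_K · Λ · (r₀/r)² · g(r) · φ_an(r), with the point-source geometry function G_P(r) = 1/r² and reference distance r₀ = 1 cm. A sketch of that evaluation; all numeric inputs below are hypothetical, not the tabulated data of either source:

```python
import bisect

def tg43_point_dose_rate(sk, Lam, r, g_table, phi_an=1.0, r0=1.0):
    """TG-43U1 1-D point-source dose rate:
        D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r),
    where the geometry-function ratio G_P(r)/G_P(r0) reduces to (r0/r)**2.
    g(r) is linearly interpolated from a sorted (radius, value) table and
    clamped to the table's endpoints outside its range.
    """
    radii = [p[0] for p in g_table]
    vals = [p[1] for p in g_table]
    i = bisect.bisect_left(radii, r)
    if i == 0:
        g = vals[0]
    elif i >= len(radii):
        g = vals[-1]
    else:
        t = (r - radii[i - 1]) / (radii[i] - radii[i - 1])
        g = vals[i - 1] + t * (vals[i] - vals[i - 1])
    return sk * Lam * (r0 / r) ** 2 * g * phi_an

# Hypothetical inputs: S_K in U, Lambda in cGy h^-1 U^-1, radii in cm.
g_table = [(0.5, 1.01), (1.0, 1.00), (2.0, 0.98), (5.0, 0.90)]
rate = tg43_point_dose_rate(sk=40_000.0, Lam=1.1, r=2.0, g_table=g_table)
```

A full 2-D TG-43 calculation would additionally use the line-source geometry function and the tabulated anisotropy function F(r, θ).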
Kung, M P; Hou, C; Oya, S; Mu, M; Acton, P D; Kung, H F
1999-08-01
Development of selective serotonin transporter (SERT) tracers for single-photon emission tomography (SPET) is important for studying the underlying pharmacology and interaction of specific serotonin reuptake site inhibitors, commonly used antidepressants, at the SERT sites in the human brain. In search of a new tracer for imaging SERT, IDAM (5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio]benzyl alcohol) was developed. In vitro characterization of IDAM was carried out with binding studies in cell lines and rat tissue homogenates. In vivo binding of [125I]IDAM was evaluated in rats by comparing the uptakes in different brain regions through tissue dissections and ex vivo autoradiography. The in vitro binding study showed that IDAM displayed an excellent affinity for SERT sites (Ki = 0.097 nM, using membrane preparations of LLC-PK1 cells expressing the specific transporter) and showed more than 1000-fold selectivity for SERT over the norepinephrine and dopamine transporters (expressed in the same LLC-PK1 cells). Scatchard analysis of [125I]IDAM binding to frontal cortical membrane homogenates prepared from control or p-chloroamphetamine (PCA)-treated rats was evaluated. As expected, the control membranes showed a Kd value of 0.25 ± 0.05 nM and a Bmax value of 272 ± 30 fmol/mg protein, while the PCA-lesioned membranes displayed a similar Kd but a reduced Bmax (20 ± 7 fmol/mg protein). Biodistribution of [125I]IDAM (partition coefficient = 473; 1-octanol/buffer) in the rat brain showed a high initial uptake (2.44% dose at 2 min after i.v. injection), with specific binding peaking at 60 min postinjection (hypothalamus-cerebellum/cerebellum = 1.75). Ex vivo autoradiographs of rat brain sections (60 min after i.v. injection of [125I]IDAM) showed intense labeling in several regions (olfactory tubercle, lateral septal nucleus, hypothalamic and thalamic nuclei, globus pallidus, central gray, superior colliculus, substantia nigra, interpeduncular nucleus, dorsal
Radial Moment Calculations of Coupled Electron-Photon Beams
Franke, Brian C.; Larsen, Edward W.
2000-07-19
The authors consider the steady-state transport of normally incident pencil beams of radiation in slabs of material. A method has been developed for determining the exact radial moments of 3-D beams of radiation as a function of depth into the slab, by solving systems of 1-D transport equations. They implement these radial moment equations in the ONEBFP discrete ordinates code and simulate energy-dependent, coupled electron-photon beams using CEPXS-generated cross sections. Modified PN synthetic acceleration is employed to speed up the iterative convergence of the 1-D charged particle calculations. For high-energy photon beams, a hybrid Monte Carlo/discrete ordinates method is examined. They demonstrate the efficiency of the calculations and make comparisons with 3-D Monte Carlo calculations. Thus, by solving 1-D transport equations, they obtain realistic multidimensional information concerning the broadening of electron-photon beams. This information is relevant to fields such as industrial radiography, medical imaging, radiation oncology, particle accelerators, and lasers.
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon
Accelerated GPU based SPECT Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it offers relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43-based algorithm to account for heterogeneities and model-specific scatter conditions. A close agreement was observed between MC and Acuros results in all models, except for a limited number of points that lie in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros offers a significant dosimetric improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient-equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
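Percentage dose difference maps such as those used in the comparison above can be normalized in several ways; a common convention is a global difference normalized to the maximum reference dose, with low-dose voxels masked out to avoid noise-dominated regions. The sketch below assumes that convention (the abstract does not state which normalization was used) and uses toy dose grids purely for illustration.

```python
import numpy as np

def percent_dose_difference(dose_eval, dose_ref, threshold=0.10):
    """Voxel-wise global percentage dose difference map:
    (eval - ref) / max(ref) * 100, with voxels below a threshold
    fraction of the reference maximum masked to NaN."""
    ref_max = dose_ref.max()
    diff = (dose_eval - dose_ref) / ref_max * 100.0
    mask = dose_ref < threshold * ref_max
    return np.where(mask, np.nan, diff)

# Toy 2x2 "dose grids" (Gy); values are illustrative only.
ref = np.array([[2.0, 1.0], [0.1, 2.0]])
ev = np.array([[2.1, 1.0], [0.1, 1.9]])
dd = percent_dose_difference(ev, ref)
print(dd)  # +5% and -5% differences; low-dose voxel masked as NaN
```

Normalizing to the reference maximum (a "global" difference) damps the large relative errors that a per-voxel ("local") normalization would report in steep low-dose gradients; which choice is appropriate depends on the clinical question.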
NASA Astrophysics Data System (ADS)
Mashnik, Stepan G.; Kerby, Leslie M.; Gudima, Konstantin K.; Sierk, Arnold J.; Bull, Jeffrey S.; James, Michael R.
2017-03-01
We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than ⁴He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to ²⁸Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Finally, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have an improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.
Wan Chan Tseung, H; Ma, J; Beltran, C
2014-06-15
Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) simulation of nuclear evaporation. To validate our MC, we performed: (1) secondary particle yield calculations in NE collisions with therapeutically relevant nuclei, (2) pencil-beam dose calculations in homogeneous phantoms, (3) a large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D gamma pass rate at 2%/2 mm for 70-230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%/2 mm for treatment plan calculations is typically above 98%. The net computational time on an NVIDIA GTX 680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
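The 2%/2 mm pass rates quoted above come from a gamma analysis in the sense of Low et al.; the sketch below shows the standard global gamma computation reduced to 1D and evaluated by brute force, as an illustration of the metric rather than the authors' implementation (which operates on 3D dose grids).

```python
import numpy as np

def gamma_pass_rate_1d(dose_eval, dose_ref, spacing_mm,
                       dose_tol=0.02, dist_tol_mm=2.0, low_cut=0.10):
    """Global 1D gamma analysis (dose_tol of ref max / dist_tol_mm).
    For each reference point, gamma is the minimum over evaluated points
    of sqrt((dose diff / dose tol)^2 + (distance / dist tol)^2).
    Brute-force search; fine for small profiles."""
    ref_max = dose_ref.max()
    x = np.arange(len(dose_eval)) * spacing_mm
    gammas = []
    for i, d_ref in enumerate(dose_ref):
        if d_ref < low_cut * ref_max:
            continue  # skip the low-dose region, as is conventional
        dd = (dose_eval - d_ref) / (dose_tol * ref_max)
        dx = (x - x[i]) / dist_tol_mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    gammas = np.array(gammas)
    return float((gammas <= 1.0).mean() * 100.0)

# Identical profiles must pass at 100%; a 50% dose error must not.
profile = np.array([0.2, 0.8, 1.0, 0.8, 0.2])
rate = gamma_pass_rate_1d(profile, profile.copy(), spacing_mm=1.0)
print(rate)  # 100.0
```

In 3D the distance search runs over a neighborhood of voxels rather than a line, and practical implementations interpolate the evaluated distribution between voxels, but the per-point minimization is the same.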
Monte Carlo applications at Hanford Engineering Development Laboratory
Carter, L.L.; Morford, R.J.; Wilcox, A.D.
1980-03-01
Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in the execution rates of present-day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present-day computer codes have taken some of the burden off the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport, including some intuition for the problems being solved; to understand the construction of sources for the random walk; to understand the interpretation of tallies made by the code; and to have a basic understanding of elementary biasing techniques.
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent are