Science.gov

Sample records for carlo photon transport

  1. Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport

    NASA Astrophysics Data System (ADS)

    Jacqmin, Dustin J.

    Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient. The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries.
The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.
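The macro-step idea described above can be sketched in a few lines of Python. This is a hypothetical toy, not code from the thesis: one "macro step" is pre-computed by averaging many small steps (standing in for a conventional condensed-history simulation), and transport then reuses the stored result, so crossing a large homogeneous region costs one lookup instead of many micro steps.

```python
import random

# Toy sketch of macro Monte Carlo (MMC): pre-compute the net outcome of
# many small steps once, store it, and reuse it during transport.
# All step parameters below are invented for illustration.

def precompute_macro_step(n_samples=10000, micro_steps=20, seed=1):
    """Average the net forward displacement of micro_steps small random
    steps; this plays the role of one database entry for a single
    material and energy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = 0.0
        for _ in range(micro_steps):
            x += rng.gauss(0.05, 0.01)  # small forward step with straggling
        total += x
    return total / n_samples

def transport_with_macro_steps(depth, macro_step):
    """Count macro steps needed to cross a homogeneous slab."""
    x, n = 0.0, 0
    while x < depth:
        x += macro_step
        n += 1
    return n

step = precompute_macro_step()           # close to 1.0 for these parameters
n = transport_with_macro_steps(10.0, step)
print(n)  # roughly 10 macro steps instead of ~200 micro steps
```

The real MMC databases index such pre-computed steps by material and energy and also store angular and energy-loss distributions; the sketch keeps only the mean displacement to show the lookup structure.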

  2. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with a slight reduction in calculation accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.

  3. Monte Carlo photon transport on vector and parallel supercomputers: Final report

    SciTech Connect

    Martin, W.R.; Nowak, P.F.

    1987-09-30

    The vectorized Monte Carlo photon transport code VPHOT has been developed for the Cray-1, Cray X-MP, and Cray-2 computers. The effort in the current project was devoted to multitasking the VPHOT code and implementing it on the Cray X-MP and Cray-2 parallel-vector supercomputers, examining the robustness of the vectorized algorithm under changes in the physics of the test problems, and evaluating the efficiency of alternative algorithms such as the ''stack-driven'' algorithm of Bobrowicz for possible incorporation into VPHOT. These tasks are discussed in this paper. 4 refs.

  4. TART97. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    SciTech Connect

    Cullen, D

    1997-11-22

    TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  5. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    SciTech Connect

    Cullen, Dermott E.

    1998-06-12

    TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  6. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  7. COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport

    NASA Astrophysics Data System (ADS)

    Hayward, Robert M.; Rahnema, Farzad

    2014-06-01

    Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
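The 2%/2 mm and 3%/3 mm acceptance tests cited above can be illustrated with a deliberately simplified 1-D check. The semantics below are an assumption for illustration (a calculated point passes if some reference sample within the distance tolerance matches its dose within the dose tolerance); clinical gamma analysis combines the two criteria more carefully.

```python
# Simplified 1-D dose/distance acceptance check, illustrative only.
# reference: list of (position_mm, dose) samples; doses are relative units.

def passes(calc_dose, x_mm, reference, dose_tol=0.02, dist_tol=2.0):
    """True if any reference point within dist_tol mm matches calc_dose
    within dose_tol (relative)."""
    for xr, dr in reference:
        if abs(xr - x_mm) <= dist_tol and abs(calc_dose - dr) <= dose_tol * dr:
            return True
    return False

ref = [(0.0, 1.00), (1.0, 0.98), (2.0, 0.95), (3.0, 0.90)]
print(passes(0.97, 0.0, ref))  # True: matches the 1 mm neighbour within 2%
print(passes(0.80, 0.0, ref))  # False: no nearby reference dose within 2%
```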

  8. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  9. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams

    SciTech Connect

    Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M.

    2010-07-15

    Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ~3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
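Two of the variance-reduction techniques named in the abstract, Russian roulette and splitting, can be sketched in a few lines. Statistical weights are adjusted so the estimator remains unbiased; the threshold and survival probability below are illustrative values, not those of the paper.

```python
import random

# Minimal sketches of Russian roulette and particle splitting.
# Weights are rescaled so expected weight is conserved (unbiasedness).

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Probabilistically kill low-weight particles, boosting survivors."""
    if weight >= threshold:
        return weight              # particle unaffected
    if rng.random() < survival:
        return weight / survival   # survivor carries the killed weight
    return 0.0                     # particle terminated

def split(weight, n):
    """Split one particle into n copies, each carrying 1/n of the weight."""
    return [weight / n] * n

# Unbiasedness check: the average weight is preserved by roulette.
rng = random.Random(42)
w0 = 0.05
mean = sum(russian_roulette(w0, rng=rng) for _ in range(100000)) / 100000
print(round(mean, 3))  # close to the original weight 0.05
```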

  10. penORNL: a parallel monte carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
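The pulse-height tally mentioned above records, for each source particle, the total energy deposited across all of its interactions, and histograms those per-history sums to model a detector spectrum. A hypothetical toy version follows; the deposition model is invented, not PENELOPE's physics.

```python
import random

# Toy pulse-height tally: sum energy deposited by ALL interactions of one
# history, then bin the per-history total. Each history lands in one bin.

def pulse_height_tally(n_histories, bin_width=0.1, seed=3):
    rng = random.Random(seed)
    bins = {}
    for _ in range(n_histories):
        # one history may deposit energy in several interactions (invented)
        deposited = sum(rng.uniform(0.0, 0.2)
                        for _ in range(rng.randint(1, 3)))
        b = int(deposited / bin_width)
        bins[b] = bins.get(b, 0) + 1
    return bins

tally = pulse_height_tally(1000)
print(sum(tally.values()))  # every history falls in exactly one bin: 1000
```

The key difference from an ordinary energy-deposition tally is the per-history summation before binning, which is what makes the result comparable to a measured detector spectrum.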

  11. Application of parallel computing to a Monte Carlo code for photon transport in turbid media

    NASA Astrophysics Data System (ADS)

    Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia

    1998-12-01

    Monte Carlo (MC) simulations of photon transport in turbid media suffer from a severe limitation: very high execution times in all practical cases. This problem can be approached with parallel computing, which, in principle, is very well suited to MC simulations because they consist of the repeated application of the same calculations to independent events. For the first time in the field of optical and IR photon transport, we developed a parallel MC code running on the parallel processor computer CRAY T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison of several single-processor runs (on an Alpha AXP DEC 2100) and N-processor runs (on the Cray T3E) for the same tissue models shows that the computation time is reduced by a factor of about 5*N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10² (as in our case) up to about 5×10³ (with the most powerful parallel computers), which could make feasible MC simulations that were previously impracticable.

  12. Multiple processor version of a Monte Carlo code for photon transport in turbid media

    NASA Astrophysics Data System (ADS)

    Colasanti, Alberto; Guida, Giovanni; Kisslinger, Annamaria; Liuzzi, Raffaele; Quarto, Maria; Riccio, Patrizia; Roberti, Giuseppe; Villani, Fulvia

    2000-10-01

    Although Monte Carlo (MC) simulations represent an accurate and flexible tool for studying photon transport in strongly scattering media with complex geometrical topologies, they are very often infeasible because of their very high computation times. Parallel computing, in principle very well suited to the MC approach because it consists of the repeated application of the same calculations to independent events, offers a possible way to overcome this problem. An MC multiple-processor code for optical and IR photon transport was developed and run on the parallel processor computer CRAY T3E (128 DEC Alpha EV5 nodes, 600 Mflops) at CINECA (Bologna, Italy). The comparison between single-processor and multiple-processor runs for the same tissue models shows that the parallelization reduces the computation time by a factor of about N, where N is the number of processors used. This means a computation time reduction by a factor ranging from about 10² (as in our case, where 128 processors are available) up to about 10³ (with the most powerful parallel computers with 1024 processors). This reduction could make feasible MC simulations that have so far been impracticable. The scaling of the execution time of the parallel code, as a function of the values of the main input parameters, is also evaluated.
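The near-ideal scaling reported above follows from the independence of particle histories: each of N workers simulates its own share of histories with an independent seed, and the tallies are simply summed at the end. A toy sketch of this partitioning (the "physics" is a single invented absorption probability, and threads stand in for the Cray's processors):

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Embarrassingly parallel MC: independent histories split across workers
# with distinct seeds, partial tallies summed at the end.

def simulate_batch(seed, n_histories, absorb_prob=0.3):
    """Simulate n_histories independent photons; count those transmitted."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_histories) if rng.random() > absorb_prob)

def parallel_estimate(total_histories, n_workers=4):
    per_worker = total_histories // n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        counts = pool.map(simulate_batch, range(n_workers),
                          [per_worker] * n_workers)   # seeds 0..N-1
    return sum(counts) / (per_worker * n_workers)

est = parallel_estimate(100000)
print(round(est, 2))  # close to the transmission probability 0.7
```

In a real transport code the per-worker random streams must be guaranteed non-overlapping (e.g. by stream-splitting generators), not merely seeded differently as in this sketch.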

  13. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    Energy Science and Technology Software Center (ESTSC)

    1998-06-12

    TART97 is a coupled neutron-photon, 3 dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  14. Monte Carlo simulation of photon transport in a randomly oriented sphere-cylinder scattering medium

    NASA Astrophysics Data System (ADS)

    Linder, T.; Löfqvist, T.

    2011-11-01

    A Monte Carlo simulation tool for simulating photon transport in a randomly oriented sphere-cylinder medium has been developed. The simulated medium represents a paper pulp suspension where the constituents are assumed to be mono-disperse micro-spheres, representing dispersed fiber fragments, and infinitely long, straight, randomly oriented cylinders representing fibers. The diameter of the micro-spheres is on the order of the wavelength, and their scattering is described by Mie theory. The fiber diameter is considerably larger than the wavelength, and the photon scattering is therefore determined by an analytical solution of Maxwell's equations for scattering at an infinitely long cylinder. By employing a Stokes-Mueller formalism, the software tracks the polarization of the light while it propagates through the medium. The effects of varying volume concentrations and sizes of the scattering components on the reflection, transmission, and polarization of the incident light are investigated. It is shown that not only the size but also the shape of the particles has a large impact on the depolarization.
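The Stokes-Mueller bookkeeping referred to above works as follows: the polarization state is a four-component Stokes vector S = (I, Q, U, V), and each scattering event multiplies it by a 4×4 Mueller matrix. The matrix below is the textbook ideal horizontal linear polarizer, used only as a familiar example; the scattering Mueller matrices in the paper come from Mie and cylinder-scattering theory.

```python
# Stokes-Mueller formalism: polarization as a 4-vector, events as 4x4
# matrix multiplications. Example matrix: ideal horizontal polarizer.

def mueller_apply(M, S):
    """Multiply a 4x4 Mueller matrix by a Stokes vector (I, Q, U, V)."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

HORIZONTAL_POLARIZER = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarized = [1.0, 0.0, 0.0, 0.0]   # I = 1, no net polarization
out = mueller_apply(HORIZONTAL_POLARIZER, unpolarized)
print(out)  # [0.5, 0.5, 0.0, 0.0]: half the intensity, fully polarized
```

A photon-tracking code chains such multiplications, one per scattering event (with rotations into the scattering plane), and tallies the exiting Stokes vectors to obtain depolarization.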

  15. A method for photon beam Monte Carlo multileaf collimator particle transport

    NASA Astrophysics Data System (ADS)

    Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe

    2002-09-01

    Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. 
The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
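The region-by-region attenuation bookkeeping described above, summing the material thickness a ray traverses in each simplified geometric region and attenuating once, can be sketched as follows. The attenuation coefficients and thicknesses are illustrative, not tungsten MLC data.

```python
import math

# Unscattered survival through stacked regions: exp(-sum(mu_i * t_i)).
# Compton scatter would then be sampled from the total traversed thickness.

def survival_fraction(segments):
    """segments: list of (mu in 1/cm, thickness in cm) along the ray."""
    optical_depth = sum(mu * t for mu, t in segments)
    return math.exp(-optical_depth)

# A ray crossing three geometric regions of a leaf (invented values):
path = [(0.5, 1.0), (0.5, 2.0), (0.5, 0.5)]
frac = survival_fraction(path)
print(round(frac, 4))  # exp(-1.75) = 0.1738
```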

  16. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes

    SciTech Connect

    Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.

    1993-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.

  17. Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport.

    PubMed

    Jia, Xun; Gu, Xuejun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B

    2010-06-01

    Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development on a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested with respect to the original sequential DPM code on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source is used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 times have been observed, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor. PMID:20463376

  18. Application of discrete ordinates and Monte Carlo methods to transport of photons from environmental sources

    SciTech Connect

    Ryman, J.C.; Eckerman, K.F.; Shultis, J.K.; Faw, R.E.; Dillman, L.T.

    1996-04-01

    Federal Guidance Report No. 12 tabulates dose coefficients for external exposure to photons and electrons emitted by radionuclides distributed in air, water, and soil. Although the dose coefficients of this report are based on previously developed dosimetric methodologies, they are derived from new, detailed calculations of energy and angular distributions of the radiations incident on the body and the transport of these radiations within the body. Effort was devoted to expanding the information available for assessment of radiation dose from radionuclides distributed on or below the surface of the ground. A companion paper (External Exposure to Radionuclides in Air, Water, and Soil) discusses the significance of the new tabulations of coefficients and provides detailed comparisons to previously published values. This paper discusses details of the photon transport calculations.

  19. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    SciTech Connect

    Badal, Andreu; Badano, Aldo

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  20. Users Manual for TART 2002: A Coupled Neutron-Photon 3-D, Combinatorial Geometry Time Dependent Monte Carlo Transport Code

    SciTech Connect

    Cullen, D E

    2003-06-06

    TART 2002 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART 2002 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART 2002 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART 2002 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART 2002 and its data files.

  1. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    SciTech Connect

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  2. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    SciTech Connect

    Cullen, D.E

    2000-11-22

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  3. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  4. Development of parallel monte carlo electron and photon transport (PMCEPT) code III: Applications to medical radiation physics

    NASA Astrophysics Data System (ADS)

    Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun

    2012-05-01

    Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data, within a maximum error of about 2%. Indeed, considering the experimental errors, the PMCEPT results can provide the gold standard of dose distributions for radiotherapy. The computing time was also much shorter than that needed for the experiments, although it is still a bottleneck for direct application to the daily routine treatment planning procedure.

  5. Monte Carlo electron-photon transport using GPUs as an accelerator: Results for a water-aluminum-water phantom

    SciTech Connect

    Su, L.; Du, X.; Liu, T.; Xu, X. G.

    2013-07-01

    An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the continuous slowing down approximation (CSDA). Photon transport was dealt with using the delta tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10{sup 6} histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)
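
    The delta tracking method named above replaces ray tracing through voxel boundaries with rejection sampling against a majorant cross section. The following is a minimal 1D illustration under assumed cross sections, not the ARCHER implementation:

    ```python
    import math
    import random

    def delta_track(x0, direction, sigma_t, sigma_maj, x_max, rng=random):
        """Woodcock (delta) tracking: sample flight distances against a
        majorant cross section sigma_maj >= sigma_t(x) everywhere, then
        accept a real collision with probability sigma_t(x)/sigma_maj.
        Rejected ("virtual") collisions let the photon continue without
        ever computing distances to material boundaries."""
        x = x0
        while True:
            x += direction * -math.log(1.0 - rng.random()) / sigma_maj
            if not (0.0 <= x <= x_max):
                return None                      # photon escaped the phantom
            if rng.random() < sigma_t(x) / sigma_maj:
                return x                         # real collision site
    ```

    Because only the global majorant is needed, the method suits voxelized geometries where per-voxel boundary crossings would otherwise dominate the run time.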

  6. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    SciTech Connect

    Morgan C. White

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data File (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. 
Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.
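
    Monte Carlo sampling of which photonuclear reaction occurs from tabulated probabilities reduces to an inverse-CDF table lookup. The sketch below is a generic illustration of that step, not the actual MCNP ACE-table machinery; the channel names and probabilities are invented:

    ```python
    import bisect
    import random

    def sample_channel(channels, probs, rng=random):
        """Pick a reaction channel from tabulated (unnormalized)
        probabilities by building the cumulative distribution and
        locating a uniform deviate within it."""
        cdf, total = [], 0.0
        for p in probs:
            total += p
            cdf.append(total)
        xi = rng.random() * total                # uniform in [0, total)
        return channels[bisect.bisect_right(cdf, xi)]
    ```

    Using `bisect_right` skips zero-probability channels correctly even when the deviate lands exactly on a CDF boundary.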

  7. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy

    PubMed Central

    Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu

    2016-01-01

    The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to respect the restrictions of real experimental geometries. The new simulation algorithm uses not only the well-known photon splitting and interaction forcing methods but also a new propagation separation method, and is highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with previous experimental and simulation results by the ESRF group and with new restricted-geometry experiments carried out at SPring-8. PMID:26698070
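
    Photon splitting and interaction forcing, the two variance-reduction methods named in the abstract, can be sketched schematically; this is a generic illustration with made-up parameters, not MUSCAT's code:

    ```python
    import math
    import random

    def forced_interaction(sigma, L, rng=random):
        """Interaction forcing: sample a collision distance from the
        exponential distribution truncated to [0, L], and return the
        weight multiplier that keeps the estimator unbiased."""
        p_int = 1.0 - math.exp(-sigma * L)       # prob. of interacting in [0, L]
        s = -math.log(1.0 - rng.random() * p_int) / sigma
        return s, p_int                          # distance, weight multiplier

    def split_photon(weight, n):
        """Photon splitting: replace one photon by n copies of weight/n,
        so each copy explores an independent history."""
        return [weight / n] * n
    ```

    Forcing guarantees every history scores at least one collision in the region of interest, while splitting multiplies the statistics of rare, important photons.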

  8. Simulation of the full-core pin-model by JMCT Monte Carlo neutron-photon transport code

    SciTech Connect

    Li, D.; Li, G.; Zhang, B.; Shu, L.; Shangguan, D.; Ma, Y.; Hu, Z.

    2013-07-01

    Since the number of cells exceeds one million, the number of tallies exceeds one hundred million, and the number of particle histories exceeds ten billion, the simulation of the full-core pin-by-pin model has become a real challenge for computers and computational methods. On the other hand, the memory required by the model has exceeded the limit of a single CPU, so spatial domain and data decomposition must be considered. JMCT (J Monte Carlo Transport code) has successfully performed the simulation of the full-core pin-by-pin model through domain decomposition and nested parallel computation. The k{sub eff} and the flux of each cell are obtained. (authors)

  9. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    PubMed

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and against the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
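
    The per-photon tracking loop described above can be reduced to a minimal 1D sketch with illustrative coefficients; ScintSim1 itself follows full 3D histories in MATLAB:

    ```python
    import math
    import random

    def follow_photon(mu_abs, mu_scat, length, rng=random):
        """Follow one optical photon along a 1D scintillator element until
        it is absorbed, escapes back through the entrance face, or reaches
        the receptor face at z = length."""
        mu_t = mu_abs + mu_scat
        z, direction = 0.0, 1.0
        while True:
            z += direction * -math.log(1.0 - rng.random()) / mu_t
            if z >= length:
                return "detected"
            if z <= 0.0:
                return "lost"                    # escaped the entrance face
            if rng.random() < mu_abs / mu_t:
                return "absorbed"
            direction = rng.choice((-1.0, 1.0))  # isotropic scatter in 1D
    ```

    Tallying the fraction of "detected" outcomes over many histories gives the element's optical collection efficiency.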

  10. Monte Carlo simulations incorporating Mie calculations of light transport in tissue phantoms: Examination of photon sampling volumes for endoscopically compatible fiber optic probes

    SciTech Connect

    Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.

    1996-04-01

    Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two nearly adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
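
    In tissue-optics Monte Carlo codes of this kind, each scattering event samples a deflection angle from a phase function; the Henyey-Greenstein function is the usual analytic surrogate for a full Mie phase function (the simulations above use Mie calculations directly, so this is a related sketch rather than their method):

    ```python
    import random

    def sample_hg_costheta(g, rng=random):
        """Sample the scattering-angle cosine from the Henyey-Greenstein
        phase function for anisotropy factor g in (-1, 1); the mean of
        the sampled cosines equals g."""
        if abs(g) < 1e-6:
            return 2.0 * rng.random() - 1.0      # isotropic limit
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
        return (1.0 + g * g - frac * frac) / (2.0 * g)
    ```

    Typical soft-tissue values of g are around 0.8-0.95, i.e. strongly forward-peaked scattering.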

  11. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  12. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  13. RCP01 - A Monte Carlo program for solving neutron and photon transport problems in three-dimensional geometry with detailed energy description and depletion capability

    SciTech Connect

    Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

    2000-03-01

    The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

  14. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  15. The MCNPX Monte Carlo Radiation Transport Code

    SciTech Connect

    Waters, Laurie S.; McKinney, Gregg W.; Durkee, Joe W.; Fensin, Michael L.; Hendricks, John S.; James, Michael R.; Johns, Russell C.; Pelowitz, Denise B.

    2007-03-19

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4c and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  16. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
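
    A weight-window check applies splitting above the window and Russian roulette below it. A minimal per-particle sketch follows; the window bounds and survival-weight convention are assumptions for illustration, not the report's settings:

    ```python
    import random

    def apply_weight_window(weight, w_low, w_high, rng=random):
        """Weight-window check: split particles above the window, play
        Russian roulette below it, and pass in-window particles through
        unchanged. Returns the list of surviving particle weights; the
        expected total weight is conserved in every branch."""
        if weight > w_high:
            n = int(weight / w_high) + 1          # split into n copies
            return [weight / n] * n
        if weight < w_low:
            w_survive = 0.5 * (w_low + w_high)    # assumed survival weight
            if rng.random() < weight / w_survive:
                return [w_survive]                # survives roulette
            return []                             # killed
        return [weight]
    ```

    The window bounds are exactly what the adjoint-flux calculations in the report are used to set, region by region.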

  17. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  18. Vectorization of Monte Carlo particle transport

    SciTech Connect

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. . Computer Center; Los Alamos National Lab., NM; Supercomputing Research Center, Bowie, MD )

    1989-01-01

    Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.
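
    The modified Amdahl's-Law model referred to above adds a data-motion term to the usual serial/vector split. A one-line sketch under that reading (the overhead term is schematic; the paper's actual model parameters are not reproduced here):

    ```python
    def amdahl_speedup(f_vec, r, overhead=0.0):
        """Speedup of a partially vectorized code: f_vec is the
        vectorizable fraction of the scalar run time (normalized to 1),
        r the vector/scalar speed ratio, and overhead an additional
        data-motion cost incurred only by the vector code."""
        return 1.0 / ((1.0 - f_vec) + f_vec / r + overhead)
    ```

    Even a small unvectorized fraction caps the speedup, which is why the observed factors of nine to eighteen fall well short of the machines' peak vector/scalar ratios.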

  19. Coupled electron-photon radiation transport

    SciTech Connect

    Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.

    2000-01-17

    Massively-parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code, ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.

  20. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  1. Photon dose calculation incorporating explicit electron transport.

    PubMed

    Yu, C X; Mackie, T R; Wong, J W

    1995-07-01

    Significant advances have been made in recent years to improve photon dose calculation. However, accurate prediction of dose perturbation effects near the interfaces of different media, where charged particle equilibrium is not established, remains unsolved. Furthermore, changes in atomic number, which affect the multiple Coulomb scattering of the secondary electrons, are not accounted for by current photon dose calculation algorithms. As local interface effects are mainly due to the perturbation of secondary electrons, a photon-electron cascade model is proposed which incorporates explicit electron transport in the calculation of the primary photon dose component in heterogeneous media. The primary photon beam is treated as the source of many electron pencil beams. The latter are transported using Fermi-Eyges theory. The scattered photon dose contribution is calculated with the dose spread array [T.R. Mackie, J.W. Scrimger, and J.J. Battista, Med. Phys. 12, 188-196 (1985)] approach. Comparisons of the calculation with Monte Carlo simulation and TLD measurements show good agreement for positions near the polystyrene-aluminum interfaces. PMID:7565390
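
    Fermi-Eyges theory gives the lateral spread of an electron pencil beam as a moment integral of the scattering power over depth, sigma^2(z) = integral from 0 to z of T(z') (z - z')^2 dz'. A minimal numerical sketch with an illustrative, constant scattering power (units and T(z') are assumptions, not the paper's data):

    ```python
    def fermi_eyges_sigma(T, z, n=1000):
        """Lateral spread sigma(z) of an electron pencil beam from
        Fermi-Eyges theory, evaluating the second-moment integral
        sigma^2(z) = int_0^z T(z') (z - z')^2 dz' by the midpoint rule
        for a depth-dependent scattering power T(z')."""
        dz = z / n
        total = 0.0
        for i in range(n):
            zp = (i + 0.5) * dz
            total += T(zp) * (z - zp) ** 2 * dz
        return total ** 0.5
    ```

    For constant T the integral reduces to T z^3 / 3, which makes a convenient sanity check.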

  2. Monte Carlo simulation of photon-induced air showers

    NASA Astrophysics Data System (ADS)

    D'Ettorre Piazzoli, B.; di Sciascio, G.

    1994-05-01

    The EPAS code (Electron Photon-induced Air Showers) is a three-dimensional Monte Carlo simulation developed to study the properties of extensive air showers (EAS) generated by the interaction of high energy photons (or electrons) in the atmosphere. Results of the present simulation concern the longitudinal, lateral, temporal and angular distributions of electrons in atmospheric cascades initiated by photons of energies up to 10^3 TeV.

  3. Parallel processing Monte Carlo radiation transport codes

    SciTech Connect

    McKinney, G.W.

    1994-02-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.

  4. An efficient framework for photon Monte Carlo treatment planning.

    PubMed

    Fix, Michael K; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J

    2007-10-01

    Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure in which many user interactions are needed, so automation is required for use in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework in which appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of DICOM streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. 
Thereby, comparisons are performed between MC calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules keeping the system highly flexible and efficient. PMID:17881793

  5. Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems

    NASA Astrophysics Data System (ADS)

    Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc

    2015-11-01

    The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect actually produce electrons as secondary particles which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time-consuming because the electrons interact continuously and, therefore, the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown that it is possible to include a separately computed coupled photon-electron contribution like ISII in a photon calculation, improving such a crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can be safely considered as isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with a minimal effort. A data library to describe the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows an almost perfect agreement. 
The use of the database increases the calculation speed by several orders of magnitude.
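
    The database lookup described above amounts to interpolating a tabulated emission distribution between the two bracketing incident energies. A generic sketch of that step (energy grids and table values are invented; the real library covers Z = 1-92 over 1-150 keV):

    ```python
    import bisect

    def interp_distribution(e, energies, tables):
        """Linearly interpolate a tabulated emission distribution between
        the two incident energies that bracket e; clamp to the end tables
        when e falls outside the tabulated grid."""
        i = bisect.bisect_right(energies, e)
        if i == 0:
            return tables[0][:]
        if i == len(energies):
            return tables[-1][:]
        e0, e1 = energies[i - 1], energies[i]
        t = (e - e0) / (e1 - e0)
        return [(1.0 - t) * a + t * b for a, b in zip(tables[i - 1], tables[i])]
    ```

    A table lookup plus one linear blend per query is what makes the database route so much faster than re-running the coupled electron simulation.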

  6. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    SciTech Connect

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
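
    The key device of the Fleck-Cummings Implicit Monte Carlo scheme generalized above is the Fleck factor, which treats part of the absorption-reemission cycle within a timestep as effective scattering and thereby stabilizes large timesteps. A one-line sketch of the standard form (symbols follow the usual IMC notation, not necessarily this paper's):

    ```python
    def fleck_factor(beta, sigma_p, c, dt, alpha=1.0):
        """Fleck factor f = 1 / (1 + alpha * beta * c * dt * sigma_p):
        the fraction of absorption treated as true absorption within a
        timestep dt, for Planck-mean opacity sigma_p, light speed c,
        and time-centering parameter alpha."""
        return 1.0 / (1.0 + alpha * beta * c * dt * sigma_p)
    ```

    As dt grows, f shrinks and more of the absorption is converted to effective scattering, which is what permits the larger implicit timesteps the abstract reports.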

  7. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    SciTech Connect

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    1989-01-01

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  8. SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)

    SciTech Connect

    West, J.T.; Murphy, J.

    1988-01-01

    SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.

  9. Monte Carlo method for photon heating using temperature-dependent optical properties.

    PubMed

    Slade, Adam Broadbent; Aguilar, Guillermo

    2015-02-01

    The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will vary greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done by comparison to established Monte Carlo simulations using constant properties, and by comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes.
PMID:25488656
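    The feedback loop described in this abstract, in which local properties are re-selected from the current temperature field at each moment in time and photon steps are segmented so path lengths use local coefficients, can be sketched roughly as follows. This is an illustrative 1D toy, not the authors' code; the linear `mu_a(T)` dependence and all constants are invented placeholders.

```python
import math
import random

MU_S = 10.0     # scattering coefficient, assumed constant (1/cm)
DX = 0.01       # voxel size (cm)
N_VOX = 100     # 1 cm slab
L = N_VOX * DX
HEAT_CAP = 1.0  # arbitrary lumped heat capacity per voxel

def mu_a(T):
    """Hypothetical linear temperature dependence of absorption (1/cm)."""
    return 0.3 + 0.001 * (T - 300.0)

def transport(temps, n_photons=200, rng=None):
    """Transport photon packets against the *current* temperature field.
    Each step is segmented at voxel boundaries so the absorbed fraction in
    every voxel uses the local, temperature-dependent mu_a."""
    rng = rng or random.Random(1)
    absorbed = [0.0] * N_VOX
    for _ in range(n_photons):
        x, u, w = 0.0, 1, 1.0                    # position, direction (+/-1), weight
        while w > 1e-4 and 0.0 <= x < L:
            s = -math.log(rng.random()) / MU_S   # free path to next scatter
            while s > 0.0 and 0.0 <= x < L:
                i = min(int(x / DX), N_VOX - 1)
                edge = (i + 1) * DX if u > 0 else i * DX
                ds = max(min(s, abs(edge - x)), 1e-9)
                f = 1.0 - math.exp(-mu_a(temps[i]) * ds)  # absorbed fraction
                absorbed[i] += w * f
                w *= 1.0 - f
                x += u * ds
                s -= ds
            u = 1 if rng.random() < 0.5 else -1  # isotropic 1D "scatter"
    return absorbed

def update_temps(temps, absorbed):
    """Heat-transfer feedback: deposited energy raises local temperature,
    which changes mu_a for the next transport iteration."""
    return [T + e / HEAT_CAP for T, e in zip(temps, absorbed)]
```

    Alternating `transport` and `update_temps` gives the coupled iteration: each transport pass sees the absorption coefficients implied by the temperatures that the previous pass produced.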

  10. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy-modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon beam energies (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensities of the various beam energies, based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower-energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413

  11. Monte Carlo radiation transport & parallelism

    SciTech Connect

    Cox, L. J.; Post, S. E.

    2002-01-01

    This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport capabilities to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the Fortran 90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

  12. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  13. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  14. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

  15. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  16. Performance of three-photon PET imaging: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kacperski, Krzysztof; Spyrou, Nicholas M.

    2005-12-01

    We have recently introduced the idea of making use of three-photon positron annihilations in positron emission tomography. In this paper, the basic characteristics of three-gamma imaging in PET are studied by means of Monte Carlo simulations and analytical computations. Two typical configurations of human and small animal scanners are considered. Three-photon imaging requires high-energy-resolution detectors. Parameters currently attainable by CdZnTe semiconductor detectors, the technology of choice for the future development of radiation imaging, are assumed. Spatial resolution is calculated as a function of detector energy resolution and size, position in the field of view, scanner size and the energies of the three-gamma annihilation photons. Possible ways to improve the spatial resolution obtained for nominal parameters, 1.5 cm and 3.2 mm FWHM for human and small animal scanners, respectively, are indicated. Counting rates of true and random three-photon events for typical human and small animal scanning configurations are assessed. A simple formula for the minimum size of lesions detectable in the three-gamma-based images is derived. Depending on the contrast and the total number of registered counts, lesions a few mm in size for human scanners and sub-mm for small animal scanners can be detected.

  17. Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, Steven

    1999-01-01

    The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single-scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
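    The target quantity of the approximation, the root-mean-square horizontal displacement of reflected photons, can also be estimated directly with a small Monte Carlo sketch like the one below. This is illustrative only (Henyey-Greenstein scattering in a homogeneous plane-parallel slab, lengths in optical-depth units); it is the kind of brute-force calculation the paper's analytic formula is meant to replace.

```python
import math
import random

def rms_horizontal_displacement(tau=10.0, omega0=1.0, g=0.0,
                                n_photons=500, rng=None):
    """Toy Monte Carlo estimate of the RMS horizontal displacement of
    photons reflected from a homogeneous plane-parallel slab of optical
    depth tau.  omega0 is the single-scattering albedo and g the
    Henyey-Greenstein asymmetry parameter."""
    rng = rng or random.Random(0)
    disps = []
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0          # incident straight down
        while True:
            s = -math.log(rng.random())     # free path (optical depth units)
            x += ux * s; y += uy * s; z += uz * s
            if z < 0.0:                     # reflected from cloud top
                disps.append(math.hypot(x, y))
                break
            if z > tau:                     # transmitted through base
                break
            if rng.random() > omega0:       # absorbed
                break
            if abs(g) < 1e-6:               # isotropic scattering angle
                ct = 2.0 * rng.random() - 1.0
            else:                           # Henyey-Greenstein inversion
                t = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
                ct = (1.0 + g * g - t * t) / (2.0 * g)
            st = math.sqrt(max(0.0, 1.0 - ct * ct))
            phi = 2.0 * math.pi * rng.random()
            if abs(uz) > 0.99999:           # rotate the direction vector
                ux, uy = st * math.cos(phi), st * math.sin(phi)
                uz = ct * math.copysign(1.0, uz)
            else:
                d = math.sqrt(1.0 - uz * uz)
                nx = st * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / d + ux * ct
                ny = st * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / d + uy * ct
                uz = -st * math.cos(phi) * d + uz * ct
                ux, uy = nx, ny
    return math.sqrt(sum(d * d for d in disps) / len(disps)) if disps else 0.0
```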

  18. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    NASA Astrophysics Data System (ADS)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed SuperMC input files is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions of protons. Other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against Accelerator Driven subcritical System (ADS) benchmark data and models released by the IAEA from its Cooperation Research Plan (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.

  19. Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC

    NASA Astrophysics Data System (ADS)

    Kawrakow, Iwan; Fippel, Matthias

    2000-08-01

    Several variance reduction techniques, such as photon splitting, electron history repetition, Russian roulette and the use of quasi-random numbers are investigated and shown to significantly improve the efficiency of the recently developed XVMC Monte Carlo code for photon beams in radiation therapy. It is demonstrated that it is possible to further improve the efficiency by optimizing transport parameters such as electron energy cut-off, maximum electron energy step size, photon energy cut-off and a cut-off for kerma approximation, without loss of calculation accuracy. These methods increase the efficiency by a factor of up to 10 compared with the initial XVMC ray-tracing technique or a factor of 50 to 80 compared with EGS4/PRESTA. Therefore, a common treatment plan (6 MV photons, 10×10 cm2 field size, 5 mm voxel resolution, 1% statistical uncertainty) can be calculated within 7 min using a single CPU 500 MHz personal computer. If the requirement on the statistical uncertainty is relaxed to 2%, the calculation time will be less than 2 min. In addition, a technique is presented which allows for the quantitative comparison of Monte Carlo calculated dose distributions and the separation of systematic and statistical errors. Employing this technique it is shown that XVMC calculations agree with EGSnrc on a sub-per cent level for simulations in the energy and material range of interest for radiation therapy.
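    Two of the variance reduction techniques named above, Russian roulette and photon splitting, are simple to state in code. The sketch below is a generic textbook version, not the XVMC implementation; the threshold and multiplier values are arbitrary examples.

```python
import random

def russian_roulette(weight, w_min=0.01, m=10.0, rng=random):
    """Below w_min, a packet survives with probability 1/m and carries m
    times its weight; the expected weight is unchanged (the game is
    unbiased) while most low-weight histories are terminated early."""
    if weight >= w_min:
        return weight
    if rng.random() < 1.0 / m:
        return weight * m
    return 0.0

def split(weight, n):
    """Photon splitting: one packet becomes n packets of weight/n each,
    preserving the total weight and hence the expectation value."""
    return [weight / n] * n
```

    Roulette spends less time on histories that contribute little to the tally, while splitting spends more time in important regions; both leave the estimator unbiased because the expected weight is conserved.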

  20. Fiber transport of spatially entangled photons

    NASA Astrophysics Data System (ADS)

    Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.

    2012-03-01

    High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for the distribution of such photons has only very recently been demonstrated in practice and is of fundamental and applied interest. It poses a big challenge compared to the established time- and frequency-domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored, so we discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.

  1. Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.

    2002-03-01

    Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.

  2. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations

    SciTech Connect

    Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.

    1988-06-01

    At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic databases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition, this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

  3. Photon spectra calculation for an Elekta linac beam using experimental scatter measurements and Monte Carlo techniques.

    PubMed

    Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G

    2008-01-01

    The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. This technique is based on irradiating the isocenter of a rectangular block made of methacrylate placed at 100 cm distance from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and register the scatter measurements after contact with the attenuator. Measured ionization values allow calculating the spectrum as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements have been made on an Elekta Precise linac using a 6 MV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
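    The reconstruction step, expressing the spectrum as a sum of mono-energetic energy bins whose detector responses are known from Monte Carlo simulation, amounts to solving a linear system. Below is a minimal sketch with an invented 3-bin response matrix; real spectrum unfolding would add regularization and non-negativity constraints on top of this.

```python
def unfold_spectrum(response, measured):
    """Solve R s = m for bin weights s by Gaussian elimination with
    partial pivoting.  response[i][j] is the detector reading at scatter
    position i per unit fluence in mono-energetic bin j (a stand-in for
    the Monte Carlo-simulated responses described in the abstract)."""
    n = len(measured)
    A = [row[:] + [m] for row, m in zip(response, measured)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    s = [0.0] * n
    for r in range(n - 1, -1, -1):
        s[r] = (A[r][n] - sum(A[r][c] * s[c] for c in range(r + 1, n))) / A[r][r]
    return s
```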

  4. Transport of photons produced by lightning in clouds

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard

    1991-01-01

    The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation; and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.

  5. Photonic sensor applications in transportation security

    NASA Astrophysics Data System (ADS)

    Krohn, David A.

    2007-09-01

    There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.

  6. Comparison of Monte Carlo simulations of photon/electron dosimetry in microscale applications.

    PubMed

    Joneja, O P; Negreanu, C; Stepanek, J; Chawla, R

    2003-06-01

    It is important to establish reliable calculational tools to plan and analyse representative microdosimetry experiments in the context of microbeam radiation therapy development. In this paper, an attempt has been made to investigate the suitability of the MCNP4C Monte Carlo code to adequately model photon/electron transport over micron distances. The case of a single cylindrical microbeam of 25-micron diameter incident on a water phantom has been simulated in detail with both MCNP4C and the code PSI-GEANT, for different incident photon energies, to get absorbed dose distributions at various depths, with and without electron transport being considered. In addition, dose distributions calculated for a single microbeam with a photon spectrum representative of the European Synchrotron Radiation Facility (ESRF) have been compared. Finally, a large number of cylindrical microbeams (a total of 2601 beams, placed on a 200-micron square pitch, covering an area of 1 cm2) incident on a water phantom have been considered to study cumulative radial dose distributions at different depths. From these distributions, ratios of peak (within the microbeam) to valley (mid-point along the diagonal connecting two microbeams) dose values have been determined. The various comparisons with PSI-GEANT results have shown that MCNP4C, with its high flexibility in terms of its numerous source and geometry description options, variance reduction methods, detailed error analysis, statistical checks and different tally types, can be a valuable tool for the analysis of microbeam experiments. PMID:12956187

  7. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted into a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  8. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy

    2015-09-01

    Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this LINAC head was divided into three stages: designing the head model using BEAMnrc, characterizing the model using BEAMDP, and analyzing the differences between simulation and measurement using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, with a source FWHM (full width at half maximum) of 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations with DOSXYZnrc in a water phantom yielded percentage depth doses (PDDs) and beam profiles at 10 cm depth, which were compared with measurements. The commissioning was considered complete when the difference between measured and calculated relative depth-dose data along the central axis and dose profiles at 10 cm depth was ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations. In this commissioning process, the dose-difference criteria for PDDs and dose profiles were achieved using an incident electron energy of 6.4 MeV.

  9. A finite element approach for modeling photon transport in tissue.

    PubMed

    Arridge, S R; Schweiger, M; Hiraoka, M; Delpy, D T

    1993-01-01

    The use of optical radiation in medical physics is important in several fields for both treatment and diagnosis. In all cases an analytic and computable model of the propagation of radiation in tissue is essential for a meaningful interpretation of the procedures. A finite element method (FEM) for deriving the photon density inside an object, and the photon flux at its boundary, assuming that the photon transport model is the diffusion approximation to the radiative transfer equation, is introduced herein. Results from the model for a particular case are given: the calculation of the boundary flux as a function of time resulting from a delta-function input to a two-dimensional circle (equivalent to a line source in an infinite cylinder) with homogeneous scattering and absorption properties. This models the temporal point spread function of interest in near-infrared spectroscopy and imaging. The convergence of the FEM results to the analytical expression for the Green's function of this system is demonstrated as the resolution of the mesh is increased. The diffusion approximation is very commonly adopted as appropriate for cases which are scattering dominated, i.e., where mu_s >> mu_a, and results from other workers have compared it to alternative models. In this article a high degree of agreement with a Monte Carlo method is demonstrated. The principal advantage of the FE method is its speed. It is in all ways as flexible as Monte Carlo methods and in addition can produce the photon density everywhere, as well as the flux on the boundary. One disadvantage is that there is no means of deriving individual photon histories. PMID:8497214
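    For the infinite homogeneous medium, the diffusion approximation admits a closed-form temporal point spread function against which numerical solvers are often checked. The sketch below codes that standard textbook Green's function, not the paper's finite-element solution; the parameter values are arbitrary examples.

```python
import math

def tpsf_infinite(r_cm, t_s, mu_a=0.1, mu_sp=10.0, c=2.2e10):
    """Fluence rate Green's function for a delta pulse in an infinite
    homogeneous medium under the diffusion approximation:
        phi(r, t) = c (4 pi D c t)^(-3/2) exp(-r^2 / (4 D c t) - mu_a c t),
    with diffusion coefficient D = 1 / (3 (mu_a + mu_s')).
    Units: cm, s, 1/cm; c is the speed of light in the medium."""
    D = 1.0 / (3.0 * (mu_a + mu_sp))
    return (c * (4.0 * math.pi * D * c * t_s) ** -1.5
            * math.exp(-r_cm ** 2 / (4.0 * D * c * t_s) - mu_a * c * t_s))
```

    Evaluated at a fixed source-detector distance, the function rises to a peak and then decays, the characteristic TPSF shape measured in time-resolved near-infrared spectroscopy.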

  10. Review of Monte Carlo modeling of light transport in tissues.

    PubMed

    Zhu, Caigang; Liu, Quan

    2013-05-01

    A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
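
    The "general procedure of tracking an individual photon packet" described in the survey is commonly summarized as hop/drop/spin with implicit capture. A geometry-free sketch, assuming an unbounded homogeneous medium (the sampled step and Henyey-Greenstein deflection cosine are shown but not propagated, and the leftover weight at the cutoff is scored as absorbed instead of playing Russian roulette, so each packet deposits exactly unit weight):

```python
import math
import random

def track_packet(mu_a, mu_s, g, rng, w_min=1e-4):
    """Hop/drop/spin loop for one photon packet with implicit capture.
    mu_a, mu_s: absorption/scattering coefficients; g: anisotropy factor."""
    mu_t = mu_a + mu_s
    w, absorbed = 1.0, 0.0
    while w > w_min:
        # hop: sample a free path length (unused here, no geometry)
        s = -math.log(1.0 - rng.random()) / mu_t
        # drop: deposit the absorbed fraction of the packet weight
        dw = w * mu_a / mu_t
        absorbed += dw
        w -= dw
        # spin: sample a Henyey-Greenstein deflection cosine
        if g == 0.0:
            cos_theta = 2.0 * rng.random() - 1.0
        else:
            tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
            cos_theta = (1.0 + g * g - tmp * tmp) / (2.0 * g)
    return absorbed + w   # leftover weight counted as absorbed on cutoff

rng = random.Random(1)
mean_absorbed = sum(track_packet(0.1, 10.0, 0.9, rng) for _ in range(100)) / 100
```

    Because weight only moves from the packet to the absorption tally, each history deposits exactly 1.0 in this unbounded medium; real codes add boundaries, refraction and detection, which is where the tallies become nontrivial.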

  11. A generic algorithm for Monte Carlo simulation of proton transport

    NASA Astrophysics Data System (ADS)

    Salvat, Francesc

    2013-12-01

    A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
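
    Sampling the polar scattering angle from a numerical DCS table, as described above, amounts to inverting a tabulated cumulative distribution. The sketch below uses plain trapezoidal accumulation and linear interpolation on a made-up table; it is a non-adaptive stand-in, not PENELOPE's adaptive algorithm or its eikonal DCS data:

```python
import bisect

def build_sampler(theta, dcs):
    """Inverse-transform sampler for a tabulated angular distribution:
    the CDF is accumulated with the trapezoidal rule and inverted by
    linear interpolation. `theta` in radians, `dcs` in relative units."""
    cdf = [0.0]
    for i in range(1, len(theta)):
        cdf.append(cdf[-1] + 0.5 * (dcs[i] + dcs[i - 1]) * (theta[i] - theta[i - 1]))
    cdf = [c / cdf[-1] for c in cdf]                 # normalize to [0, 1]

    def sample(u):
        """Map a uniform deviate u in [0, 1] to a polar angle."""
        i = min(bisect.bisect_right(cdf, u) - 1, len(theta) - 2)
        frac = (u - cdf[i]) / (cdf[i + 1] - cdf[i])
        return theta[i] + frac * (theta[i + 1] - theta[i])

    return sample

# Made-up forward-peaked table on [0, pi]; not PENELOPE data.
sample = build_sampler([0.0, 0.5, 1.0, 2.0, 3.1416],
                       [10.0, 5.0, 2.0, 1.0, 0.5])
```

    PENELOPE's adaptive grid refines the table where linear interpolation of the CDF would exceed a set error; the fixed grid above is the simplest version of the same idea.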

  12. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  13. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Akhavan, Azadeh; Vosoughi, Naser

    2015-12-01

    Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used for pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: an algorithm based on the collided components of the scalar flux, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
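
    Iterating on the scattering source, as in the collided-components algorithm above, can be illustrated in its simplest (zero-dimensional, one-group, infinite-medium) form: the Neumann series sums successive collided components of the flux and converges to q/(Σt − Σs). This is an assumed toy analogue, not the ANISN discrete-ordinates treatment:

```python
def source_iteration(sigma_t, sigma_s, q, n_iter=200):
    """Neumann-series ('collided components') solution of the one-group,
    infinite-medium balance sigma_t * phi = sigma_s * phi + q. Each
    iteration adds the flux produced by one more scattering of the
    previous component; the sum converges to q / (sigma_t - sigma_s)."""
    c = sigma_s / sigma_t          # scattering ratio (must be < 1)
    component = q / sigma_t        # uncollided flux
    phi = component
    for _ in range(n_iter):
        component *= c             # next collided component
        phi += component
    return phi

phi = source_iteration(1.0, 0.5, 1.0)   # converges to 1 / (1 - 0.5) = 2
```

    In the full one-dimensional problem each "component" is itself a spatial/angular transport sweep rather than a scalar multiplication, but the convergence structure is the same.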

  14. Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Moralles, M.; Guimarães, C. C.; Okuno, E.

    2005-06-01

    Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy-depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating an X-ray tube (Philips MG-450) with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm⁻¹.
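
    The light-attenuation effect described for the fluorite dosimeter can be sketched as a weighted sum over the dose-depth profile: light emitted at depth x reaches the reader attenuated by exp(−μx), so reading from the irradiated side and from the opposite side yields different signals. The profile below is hypothetical; only μ = 2.2 mm⁻¹ is taken from the abstract:

```python
import math

def tl_signal(depth_dose, dx, mu):
    """Thermoluminescent light reaching the reader: each slab's dose D_i,
    centred at depth (i + 0.5) * dx (mm), contributes D_i * exp(-mu * depth).
    Illustrative one-dimensional model only."""
    return sum(d * math.exp(-mu * (i + 0.5) * dx)
               for i, d in enumerate(depth_dose)) * dx

# Hypothetical shallow X-ray dose-depth profile in a 1 mm thick chip
# (five 0.2 mm slabs); mu = 2.2 / mm as determined for CaF2:NaCl.
profile = [1.00, 0.80, 0.60, 0.50, 0.45]
front = tl_signal(profile, 0.2, 2.2)                 # read from irradiated side
back = tl_signal(list(reversed(profile)), 0.2, 2.2)  # read from opposite side
```

    Comparing the two readouts against the known deposition profile is what lets μ be fitted, as done in the paper for CaF2:NaCl.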

  15. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
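
    The Map/Reduce split described above (Map tasks compute photon histories in parallel, a Reduce task aggregates the tallies) can be mimicked with Python's built-in map and functools.reduce. This is a toy one-dimensional stand-in for MC321-on-Hadoop, with a simplified semi-infinite medium, implicit capture and isotropic 1-D scattering as assumptions:

```python
import math
import random
from functools import reduce

def map_task(args):
    """Map task: simulate a chunk of photon histories in a semi-infinite
    1-D medium and return a partial tally, mimicking what each Hadoop
    Map task would score. Each chunk gets an independent seed."""
    seed, n_photons, mu_a, mu_s = args
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    tally = {"absorbed": 0.0, "escaped": 0.0}
    for _ in range(n_photons):
        w, z, direction = 1.0, 0.0, 1.0
        while w > 1e-4:
            z += direction * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:                       # back out through the surface
                tally["escaped"] += w
                w = 0.0
                break
            dw = w * mu_a / mu_t              # implicit capture
            tally["absorbed"] += dw
            w -= dw
            direction = 1.0 if rng.random() < 0.5 else -1.0
        tally["absorbed"] += w                # leftover weight on cutoff
    return tally

def reduce_task(a, b):
    """Reduce task: merge partial tallies from the Map tasks."""
    return {k: a[k] + b[k] for k in a}

chunks = [(seed, 500, 0.1, 10.0) for seed in range(4)]
total = reduce(reduce_task, map(map_task, chunks),
               {"absorbed": 0.0, "escaped": 0.0})
```

    Replacing `map` with a process pool, or with actual Hadoop tasks, changes nothing in the tally logic, which is why the distributed run in the paper reproduces the single-threaded output.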

  16. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    NASA Astrophysics Data System (ADS)

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.

  17. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  18. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin

    2015-07-01

    The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons were calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The SAFs from the Rad-HUMAN phantom showed similar trends to, but were larger than, those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)
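
    The SAF itself is a simple ratio once the transport code has scored the absorbed energy: the fraction of the energy emitted in the source organ that is absorbed in the target, per unit target mass. A minimal sketch with made-up numbers (a real calculation would tally the absorbed energy with MCNP over the voxel phantom):

```python
def specific_absorbed_fraction(e_emitted_mev, e_absorbed_mev, target_mass_kg):
    """SAF(target <- source) in kg^-1: fraction of the energy emitted in
    the source region that is absorbed in the target, per unit target mass."""
    return (e_absorbed_mev / e_emitted_mev) / target_mass_kg

# Hypothetical tallies: 1e6 histories of 0.1 MeV photons emitted in the
# source organ, of which 2e3 MeV is scored in a 0.02 kg target organ.
saf = specific_absorbed_fraction(1e6 * 0.1, 2e3, 0.02)
```

    Differences in organ mass and inter-organ distance enter through both the tallied absorbed energy and the target mass in the denominator, which is why the Rad-HUMAN values differ from the ICRP and ORNL models.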

  19. Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications

    SciTech Connect

    DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.

    1995-12-31

    The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface, MCNPRT (MCNP Radiation Therapy), has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback of this dosimetry system is the amount of time needed to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 x 15 x 15 cm³ phantom composed of water voxels ranging in size from 10 x 10 x 10 mm³ to 2 x 2 x 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 on a simulation phantom containing 5 x 5 x 5 mm³ water voxels. The code was also tested on the 80 node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produced a wall-clock speedup of 29 relative to one SUN SPARC2 computer.

  20. Monte Carlo Simulation of Light Transport in Tissue, Beta Version

    Energy Science and Technology Software Center (ESTSC)

    2003-12-09

    Understanding light-tissue interaction is fundamental in the field of biomedical optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating probabilities of scattering and absorption. Other measured quantities are total reflected light, total transmitted light, and total heat absorbed.

  1. Efficient, Automated Monte Carlo Methods for Radiation Transport

    PubMed Central

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2012-01-01

    Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  2. Efficient, automated Monte Carlo methods for radiation transport

    SciTech Connect

    Kong Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensible model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

  3. Equivalence of four Monte Carlo methods for photon migration in turbid media.

    PubMed

    Sassaroli, Angelo; Martelli, Fabrizio

    2012-10-01

    In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed. PMID:23201658
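
    The statistical equivalence discussed above can be demonstrated in miniature by comparing two common ways of building trajectories: analog absorption (the photon survives each collision with probability μs/μt) versus absorption weighting (the packet's weight is multiplied by μs/μt at each collision). The 1-D slab transmittance below is an assumed toy problem, not one of the paper's four methods or geometries; the two unbiased estimators should agree within statistical noise:

```python
import math
import random

def transmit_analog(mu_a, mu_s, L, n, rng):
    """Analog trajectories: survival is decided at each collision."""
    mu_t = mu_a + mu_s
    hits = 0
    for _ in range(n):
        z, d = 0.0, 1.0
        while True:
            z += d * (-math.log(1.0 - rng.random()) / mu_t)
            if z > L:                       # transmitted through the slab
                hits += 1
                break
            if z < 0.0:                     # reflected out the front face
                break
            if rng.random() < mu_a / mu_t:  # absorbed at the collision
                break
            d = 1.0 if rng.random() < 0.5 else -1.0   # isotropic 1-D spin
    return hits / n

def transmit_weighted(mu_a, mu_s, L, n, rng):
    """Weighted trajectories: absorption reduces the packet weight."""
    mu_t = mu_a + mu_s
    score = 0.0
    for _ in range(n):
        z, d, w = 0.0, 1.0, 1.0
        while w > 1e-3:
            z += d * (-math.log(1.0 - rng.random()) / mu_t)
            if z > L:
                score += w
                break
            if z < 0.0:
                break
            w *= mu_s / mu_t                # implicit capture
            d = 1.0 if rng.random() < 0.5 else -1.0
    return score / n

rng = random.Random(7)
t_analog = transmit_analog(0.5, 2.0, 1.0, 20000, rng)
t_weighted = transmit_weighted(0.5, 2.0, 1.0, 20000, rng)
```

    Both estimators target the same transmittance; they differ only in variance and cost, which mirrors the paper's argument that the four trajectory-building schemes are statistically equivalent.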

  4. GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media

    NASA Astrophysics Data System (ADS)

    Doronin, Alex; Meglinski, Igor

    2011-03-01

    Due to recent intense developments in lasers and optical technologies, a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light, these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. The conceptual engineering design of new optical diagnostic systems requires a clear understanding of light-tissue interaction and the peculiarities of optical radiation propagation therein. The description of photon migration within random media is based on radiative transfer theory, which forms the basis of Monte Carlo modelling of light propagation in complex turbid media such as biological tissues. In the current presentation, as a further development of the Monte Carlo technique, we introduce a novel Object-Oriented Programming (OOP) paradigm, accelerated by a Graphics Processing Unit (GPU), that provides an opportunity to increase the performance of a standard Monte Carlo simulation by more than 100 times.

  5. GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media

    NASA Astrophysics Data System (ADS)

    Doronin, Alex; Meglinski, Igor

    2010-10-01

    Due to recent intense developments in lasers and optical technologies, a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light, these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. The conceptual engineering design of new optical diagnostic systems requires a clear understanding of light-tissue interaction and the peculiarities of optical radiation propagation therein. The description of photon migration within random media is based on radiative transfer theory, which forms the basis of Monte Carlo modelling of light propagation in complex turbid media such as biological tissues. In the current presentation, as a further development of the Monte Carlo technique, we introduce a novel Object-Oriented Programming (OOP) paradigm, accelerated by a Graphics Processing Unit (GPU), that provides an opportunity to increase the performance of a standard Monte Carlo simulation by more than 100 times.

  6. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.
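
    Charge hopping in such lattice models is often advanced with kinetic Monte Carlo using Miller-Abrahams rates. The sketch below is a one-dimensional, single-carrier toy version with assumed parameters, not the authors' coarse-grained polymer model; it returns a mean drift velocity in lattice units:

```python
import math
import random

def kmc_drift(site_energies, field, n_steps, rng, nu0=1.0, kT=0.025):
    """Kinetic Monte Carlo for one carrier hopping on a periodic 1-D
    lattice. Nearest-neighbour Miller-Abrahams rates: nu0 * exp(-dE/kT)
    for uphill hops (dE > 0), nu0 otherwise; `field` is the energy drop
    per site (eV) along +x. Returns displacement / elapsed time."""
    n = len(site_energies)
    pos, t = 0, 0.0
    for _ in range(n_steps):
        rates = []
        for step in (+1, -1):
            de = (site_energies[(pos + step) % n]
                  - site_energies[pos % n] - field * step)
            rates.append(nu0 * math.exp(-de / kT) if de > 0.0 else nu0)
        total = rates[0] + rates[1]
        t += -math.log(1.0 - rng.random()) / total   # exponential residence
        pos += 1 if rng.random() * total < rates[0] else -1
    return pos / t

rng = random.Random(3)
velocity = kmc_drift([0.0] * 10, 0.05, 5000, rng)    # drifts along the field
```

    Replacing the flat energy landscape with energies drawn from ordered and disordered regions is how a model like the one in the abstract probes the effect of morphology on mobility.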

  7. Monte Carlo Modeling of Photon Interrogation Methods for Characterization of Special Nuclear Material

    SciTech Connect

    Pozzi, Sara A; Downar, Thomas J; Padovani, Enrico; Clarke, Shaun D

    2006-01-01

    This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission

  8. Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy

    SciTech Connect

    Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-02-15

    Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.

  9. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  10. Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames

    SciTech Connect

    Mehta, R.S.; Haworth, D.C.; Modest, M.F.

    2010-05-15

    A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O{sub 2} to 55% O{sub 2}). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)

  11. Comparing gold nano-particle enhanced radiotherapy with protons, megavoltage photons and kilovoltage photons: a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; McMahon, Stephen J.; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan

    2014-12-01

    Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.

  12. SIMIND Monte Carlo simulation of a single photon emission CT

    PubMed Central

    Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.

    2010-01-01

    In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to acquire its experimental characterization in terms of energy resolution, sensitivity, spatial resolution and imaging of phantoms using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with or without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra present similar peaks for the 99mTc gamma energy at 140 keV. The FWHM was calculated to be 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, compared to the specified 9.9% for both systems, respectively. Sensitivities of the real and virtual gamma cameras were calculated to be 85.11 and 85.39 cps/MBq, respectively. The energy spectra of the simulated and real gamma cameras matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similarity in contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve the quality of images. PMID:20177569
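
    The energy resolution quoted above follows from the photopeak FWHM divided by the peak energy. A sketch of extracting the FWHM from a sampled spectrum by linear interpolation at half maximum (the spectrum below is synthetic, not E.CAM data):

```python
def fwhm(energies, counts):
    """Full width at half maximum of a single peak, found by linear
    interpolation wherever the sampled spectrum crosses half of its
    maximum; energy resolution (%) is then 100 * FWHM / peak energy."""
    half = max(counts) / 2.0
    crossings = []
    for i in range(1, len(counts)):
        lo, hi = counts[i - 1], counts[i]
        if (lo - half) * (hi - half) < 0.0:          # half maximum crossed
            frac = (half - lo) / (hi - lo)
            crossings.append(energies[i - 1]
                             + frac * (energies[i] - energies[i - 1]))
    return crossings[-1] - crossings[0]

# Synthetic triangular photopeak centred on the 140 keV line of 99mTc.
e_kev = [126.0, 133.0, 140.0, 147.0, 154.0]
counts = [0.0, 40.0, 100.0, 40.0, 0.0]
width = fwhm(e_kev, counts)                  # ~11.7 keV for this shape
resolution_pct = 100.0 * width / 140.0       # ~8.3 %
```

    With the paper's measured 13.80 keV FWHM the same ratio gives 13.80/140 ≈ 9.86%, matching the reported experimental resolution.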

  13. Validation of the Swiss Monte Carlo Plan for a static and dynamic 6 MV photon beam.

    PubMed

    Magaddino, Vera; Manser, Peter; Frei, Daniel; Volken, Werner; Schmidhalter, Daniel; Hirschi, Lukas; Fix, Michael K

    2011-05-01

    Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As for any dose calculation algorithm, the MCTP needs to be commissioned and validated before the algorithm is used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. In this respect, all parts, i.e. open fields and all clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions. The statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For the dose difference criterion of ±1% of D(max) and the distance to agreement criterion of ±1 mm, the γ analysis showed an excellent agreement between measurements and simulations for all static open and MLC fields. The tuning of the density and the thickness for all hard wedges led to an agreement with the corresponding measurements within 1% or 1 mm. Similar results were achieved for the block. 
For the validation of the tuned hard wedges, a very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criterion when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated dose values within a 2% dose difference or within a 1 mm distance. The SMCP has been successfully validated for static and dynamic 6 MV photon beams, thus providing accurate dose calculations suitable for applications in clinical cases. PMID:21239148
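The 1%/1 mm γ test used throughout this validation can be sketched in one dimension. This is a hedged illustration: the dose arrays, grid spacing and global normalization below are invented for the example, not taken from the SMCP.

```python
import math

# Minimal 1D gamma analysis of the kind cited above (1% dose difference,
# 1 mm distance-to-agreement). Dose arrays and spacing are illustrative.

def gamma_1d(measured, calculated, spacing_mm, dose_tol, dta_mm):
    """Return the gamma index at each measured point (global dose normalization)."""
    d_max = max(measured)
    gammas = []
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dc in enumerate(calculated):
            dx = (i - j) * spacing_mm            # spatial separation
            dd = (dc - dm) / d_max               # dose difference, normalized
            best = min(best, math.sqrt((dx / dta_mm) ** 2 + (dd / dose_tol) ** 2))
        gammas.append(best)
    return gammas

measured   = [0.2, 0.5, 0.9, 1.0, 0.9, 0.5, 0.2]
calculated = [0.2, 0.5, 0.9, 1.0, 0.9, 0.5, 0.2]  # identical -> all gammas 0
g = gamma_1d(measured, calculated, spacing_mm=1.0, dose_tol=0.01, dta_mm=1.0)
print(all(gi <= 1.0 for gi in g))  # every point passes the 1%/1 mm criterion
```

A point passes when its gamma value is at most 1, i.e. some calculated point lies within the combined dose/distance ellipse.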

  14. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil.

    PubMed

    Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J

    2000-03-01

    The dose rate conversion factors D(CF) (absorbed dose rate in air per unit activity per unit of soil mass, nGy h(-1) per Bq kg(-1)) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered) the D(CF) values calculated by the three codes are in very good agreement with one another. The comparison between these results and results deduced previously by other authors indicates good agreement (differences of less than 15%) for photon energies above 1,500 keV. In contrast, the agreement is poorer (differences of 20-30%) for low-energy photons. PMID:10688452

  15. grmonty: A MONTE CARLO CODE FOR RELATIVISTIC RADIATIVE TRANSPORT

    SciTech Connect

    Dolence, Joshua C.; Gammie, Charles F.; Leung, Po Kin; Moscibrodzka, Monika

    2009-10-01

    We describe a Monte Carlo radiative transport code intended for calculating spectra of hot, optically thin plasmas in full general relativity. The version we describe here is designed to model hot accretion flows in the Kerr metric and therefore incorporates synchrotron emission and absorption, and Compton scattering. The code can be readily generalized, however, to account for other radiative processes and an arbitrary spacetime. We describe a suite of test problems, and demonstrate the expected N^(-1/2) convergence rate, where N is the number of Monte Carlo samples. Finally, we illustrate the capabilities of the code with a model calculation, a spectrum of the slowly accreting black hole Sgr A* based on data provided by a numerical general relativistic MHD model of the accreting plasma.
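The N^(-1/2) scaling quoted above is generic to Monte Carlo estimators and is easy to reproduce with a toy integrand. The sketch below (plain Python with a uniform variate of known mean, not grmonty) measures the RMS error at two sample counts and checks that a 100-fold increase in N shrinks the error by roughly a factor of 10.

```python
import random

# Estimate E[U(0,1)] = 0.5 with N Monte Carlo samples and observe that the
# statistical error scales roughly as N**-0.5. Toy integrand, illustrative only.

def mc_error(n_samples, n_trials=200, seed=1):
    """RMS error of an N-sample MC estimate of the mean of U(0,1)."""
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_trials):
        est = sum(rng.random() for _ in range(n_samples)) / n_samples
        sq += (est - 0.5) ** 2
    return (sq / n_trials) ** 0.5

e100 = mc_error(100)
e10000 = mc_error(10000)
# A 100x increase in N should shrink the RMS error by about 10x.
print(e100 / e10000)
```

The printed ratio fluctuates around 10 from run parameters alone, which is exactly the N^(-1/2) behaviour the test suite verifies.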

  16. Analytical band Monte Carlo analysis of electron transport in silicene

    NASA Astrophysics Data System (ADS)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1, beyond which it becomes less sensitive to changes in the substrate charge impurities and surface optical phonons. We also found that the further reduction of mobility to ∼100 cm2 V‑1 s‑1 demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  17. Modeling photon transport in transabdominal fetal oximetry

    NASA Astrophysics Data System (ADS)

    Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton

    2000-07-01

    The possibility of optical oximetry of the blood in the fetal brain measured across the maternal abdomen just prior to birth is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diameter fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm2 fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 × 10^-6 cm^-2 if the fetal brain has the same optical properties as the mother, and Tf = 1.0 × 10^-6 cm^-2 for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
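The perturbation above is a one-line calculation. Recomputing it from the rounded transmission values quoted in the abstract gives -55% rather than the stated -53%, presumably because the published figure used unrounded transport values. A minimal check:

```python
# Recompute the fetal-brain perturbation from the rounded transport values
# quoted above: P = (Tf - Tm) / Tm. Because Tm and Tf are rounded to two
# significant figures, the result (-55%) differs slightly from the paper's -53%.
T_m = 2.2e-6  # fluence rate per W, homogeneous (maternal) optical properties
T_f = 1.0e-6  # fluence rate per W, optically perturbing fetal brain
P = (T_f - T_m) / T_m
print(f"P = {P:.0%}")  # -> P = -55%
```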

  18. Modelling 6 MV photon beams of a stereotactic radiosurgery system for Monte Carlo treatment planning

    NASA Astrophysics Data System (ADS)

    Deng, Jun; Guerrero, Thomas; Ma, C.-M.; Nath, Ravinder

    2004-05-01

    The goal of this work is to build a multiple source model to represent the 6 MV photon beams from a Cyberknife stereotactic radiosurgery system for Monte Carlo treatment planning dose calculations. To achieve this goal, the 6 MV photon beams have been characterized and modelled using the EGS4/BEAM Monte Carlo system. A dual source model has been used to reconstruct the particle phase space at a plane immediately above the secondary collimator. The proposed model consists of two circular planar sources for the primary photons and the scattered photons, respectively. The dose contribution of the contaminant electrons was found to be of the order of 10^-3 of the total maximum dose and therefore has been omitted in the source model. Various comparisons have been made to verify the dual source model against the full phase space simulated using the EGS4/BEAM system. The agreement in percent depth dose (PDD) curves and dose profiles between the phase space and the source model was generally within 2%/1 mm for various collimators (5 to 60 mm in diameter) at 80 to 100 cm source-to-surface distances (SSD). Excellent agreement (within 1%/1 mm) was also found between the dose distributions in heterogeneous lung and bone geometry calculated using the original phase space and those calculated using the source model. These results demonstrated the accuracy of the dual source model for Monte Carlo treatment planning dose calculations for the Cyberknife system.

  19. Monte Carlo simulation of secondary radiation exposure from high-energy photon therapy using an anthropomorphic phantom.

    PubMed

    Frankl, Matthias; Macián-Juan, Rafael

    2016-03-01

    The development of intensity-modulated radiotherapy treatments delivering large amounts of monitor units (MUs) recently raised concern about higher risks for secondary malignancies. In this study, optimised combinations of several variance reduction techniques (VRTs) have been implemented in order to achieve high precision in Monte Carlo (MC) radiation transport simulations and the calculation of in- and out-of-field photon and neutron dose-equivalent distributions in an anthropomorphic phantom using MCNPX, v.2.7. The computer model included a Varian Clinac 2100C treatment head and a high-resolution head phantom. By means of the applied VRTs, a relative uncertainty for the photon dose-equivalent distribution of <1% in-field and 15% on average over the rest of the phantom could be obtained. The neutron dose equivalent, caused by photonuclear reactions in the linear accelerator components at photon energies above approximately 8 MeV, has been calculated. The relative uncertainty, calculated for each voxel, could be kept below 5% on average over all voxels of the phantom. Thus, a very detailed neutron dose distribution could be obtained. The achieved precision now allows a far better estimation of both photon and especially neutron doses out-of-field, where neutrons can become the predominant component of secondary radiation. PMID:26311702

  20. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. 
The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.
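The core MCSA idea of estimating a linear solve with a Monte Carlo process can be sketched on a toy fixed-point system. The matrix, source vector and walk parameters below are invented for illustration and bear no relation to the SPN or Navier-Stokes systems above; the sketch only shows how weighted random walks sample the Neumann series of x = Hx + b.

```python
import random

# Minimal forward Monte Carlo linear solver of the kind MCSA builds on:
# one component of the solution of x = H x + b is estimated by sampling terms
# of the Neumann series x = sum_k H^k b with weighted random walks.
# H is a small illustrative matrix with spectral radius < 1, not a transport
# operator.

H = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
b = [1.0, 2.0, 0.5]

def mc_component(i0, n_walks=40000, n_steps=30, seed=5):
    """Monte Carlo estimate of x[i0] for x = H x + b."""
    rng = random.Random(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        i, w, acc = i0, 1.0, 0.0
        for _ in range(n_steps):
            acc += w * b[i]          # tally the current Neumann-series term
            j = rng.randrange(n)     # uniform transition to the next state
            w *= H[i][j] * n         # importance weight H[i][j] / (1/n)
            i = j
            if w == 0.0:
                break                # dead branch: no further contribution
        total += acc
    return total / n_walks

x0 = mc_component(0)
print(x0)  # the direct solve of (I - H) x = b gives x[0] ~ 1.674
```

MCSA goes one step further than this plain estimator: it uses such walks only to correct the residual of a deterministic fixed-point iteration, which is what accelerates convergence.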

  1. Neutron and photon transport in seagoing cargo containers

    SciTech Connect

    Pruet, J.; Descalle, M.-A.; Hall, J.; Pohl, B.; Prussin, S.G.

    2005-05-01

    Factors affecting sensing of small quantities of fissionable material in large seagoing cargo containers by neutron interrogation and detection of β-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the β-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of our results to cases not specifically considered here.

  2. A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation

    SciTech Connect

    Wu, Y.; Modest, M.F.; Haworth, D.C.

    2007-05-01

    A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.

  3. A Monte Carlo study on neutron and electron contamination of an unflattened 18-MV photon beam.

    PubMed

    Mesbahi, Asghar

    2009-01-01

    Recent studies on flattening filter (FF) free beams have shown increased dose rate and less out-of-field dose for unflattened photon beams. On the other hand, changes in contamination electrons and neutron spectra produced through photon (E>10 MV) interactions with linac components have not been completely studied for FF free beams. The objective of this study was to investigate the effect of removing the FF on the contamination electron and neutron spectra for an 18-MV photon beam using the Monte Carlo (MC) method. The 18-MV photon beam of an Elekta SL-25 linac was simulated using the MCNPX MC code. The photon, electron and neutron spectra at a distance of 100 cm from the target and on the central axis of the beam were scored for 10 x 10 and 30 x 30 cm(2) fields. Our results showed an increase in contamination electron fluence (normalized to photon fluence) of up to 1.6 times for the FF free beam, which causes a higher skin dose for patients. A neutron fluence reduction of 54% was observed for unflattened beams. Our study confirmed previous measurement results showing neutron dose reduction for unflattened beams. This feature can lead to a lower neutron dose for patients treated with unflattened high-energy photon beams. PMID:18760613

  4. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
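The Random Sequential Addition step that the paper accelerates can be sketched in a few lines. This naive version (plain Python, equal-sized spheres in a unit cube, brute-force overlap checks) is the O(N^2) baseline that a nearest-neighbor search would speed up; all parameters are illustrative, not TRISO geometry.

```python
import random

# Bare-bones Random Sequential Addition (RSA) of equal, non-overlapping
# spheres in a unit cube. Each candidate center is tested against every
# placed sphere; this brute-force check is what a nearest-neighbor search
# accelerates in the paper's modified RSA method.

def rsa_pack(n_spheres, radius, max_tries=100000, seed=7):
    rng = random.Random(seed)
    centers = []
    for _ in range(max_tries):
        if len(centers) == n_spheres:
            break
        c = [rng.uniform(radius, 1.0 - radius) for _ in range(3)]  # stay in cube
        # accept only if the candidate overlaps no previously placed sphere
        if all(sum((a - b) ** 2 for a, b in zip(c, o)) >= (2 * radius) ** 2
               for o in centers):
            centers.append(c)
    return centers

spheres = rsa_pack(n_spheres=50, radius=0.05)
print(len(spheres))  # 50 non-overlapping spheres placed
```

At this low packing fraction (~2.6% by volume) RSA succeeds easily; near the RSA jamming limit the rejection rate soars, which is why fast neighbor queries matter.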

  5. A Monte Carlo simulation of ion transport at finite temperatures

    NASA Astrophysics Data System (ADS)

    Ristivojevic, Zoran; Petrović, Zoran Lj

    2012-06-01

    We have developed a Monte Carlo simulation for ion transport in hot background gases, which is an alternative way of solving the corresponding Boltzmann equation that determines the distribution function of ions. We consider the limit of low ion densities, when the distribution function of the background gas remains unchanged by collisions with ions. Special attention has been paid to properly treating the thermal motion of the host gas particles and their influence on ions, which is very important at low electric fields, when the mean ion energy is comparable to the thermal energy of the host gas. We found the conditional probability distribution of gas velocities for a gas particle that collides with an ion of a given velocity. Also, we have derived exact analytical formulae for piecewise calculation of the collision frequency integrals. We address the cases when the background gas is monocomponent and when it is a mixture of different gases. The techniques described here are required for Monte Carlo simulations of ion transport and for hybrid models of non-equilibrium plasmas. The range of energies where it is necessary to apply the technique has been defined. The results we obtained are in excellent agreement with existing ones obtained by complementary methods. Having verified our algorithm, we were able to produce calculations for Ar+ ions in Ar and propose them as a new benchmark for thermal effects. The developed method is widely applicable for solving the Boltzmann equation that appears in many different contexts in physics.
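The key subtlety above, that the velocity of the collision partner is drawn not from the bare Maxwellian but from the Maxwellian weighted by the relative speed, can be illustrated with rejection sampling. This is a hedged sketch assuming a hard-sphere cross section, with all velocities in units of the gas thermal spread; it is not the paper's piecewise-analytical method.

```python
import math
import random

# Rejection-sampling sketch of thermal collision-partner selection: the gas
# velocity seen by a colliding ion follows a Maxwellian *weighted by the
# relative speed* |v_ion - v_gas| (hard-sphere cross section assumed).
# Units and parameters are illustrative.

def sample_partner(v_ion, v_th, rng, g_max):
    """Draw a gas-particle velocity for a collision with an ion at v_ion."""
    while True:
        v_gas = [rng.gauss(0.0, v_th) for _ in range(3)]  # Maxwellian proposal
        g = math.dist(v_ion, v_gas)                       # relative speed
        if rng.random() < g / g_max:   # accept in proportion to relative speed
            return v_gas

rng = random.Random(3)
v_ion = (2.0, 0.0, 0.0)   # ion velocity, in units of the gas thermal spread
v_th = 1.0                # per-component thermal spread of the gas
g_max = 8.0               # rejection envelope; must bound g in practice
samples = [sample_partner(v_ion, v_th, rng, g_max) for _ in range(20000)]
mean_g = sum(math.dist(v_ion, v) for v in samples) / len(samples)
# Weighting by relative speed biases collisions toward faster encounters, so
# the mean relative speed of *colliding* partners exceeds the unweighted one.
print(mean_g)
```

For these parameters the unweighted mean relative speed is about 2.49; the weighted (collision-partner) mean comes out near 2.8, which is the thermal-motion effect the paper treats exactly.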

  6. New capabilities for Monte Carlo simulation of deuteron transport and secondary products generation

    NASA Astrophysics Data System (ADS)

    Sauvan, P.; Sanz, J.; Ogando, F.

    2010-03-01

    Several important research programs are dedicated to the development of facilities based on deuteron accelerators. In designing these facilities, the definition of a validated computational approach able to simulate deuteron transport and to evaluate deuteron interactions and production of secondary particles with acceptable precision is a very important issue. Current Monte Carlo codes, such as MCNPX or PHITS, when applied to deuteron transport calculations use built-in semi-analytical models to describe deuteron interactions. These models are found to be unreliable in predicting the neutrons and photons generated by low energy deuterons, typically present in those facilities. We present a new computational tool, resulting from an extension of the MCNPX code, which significantly improves the treatment of problems where any secondary product (neutrons, photons, tritons, etc.) generated by low energy deuteron reactions could play a major role. First, it handles evaluated deuteron data libraries, which allow a better description of low energy deuteron interactions. Second, it includes a variance reduction technique for the production of secondary particles by charged particle-induced nuclear interactions, which drastically reduces the computing time needed for transport and nuclear response calculations. Verification of the computational tool is successfully achieved. This tool can be very helpful in addressing design issues such as selection of the dedicated neutron production target and accelerator radioprotection analysis. It can also be helpful for testing the deuteron cross-sections under development in the frame of different international nuclear data programs.

  7. A multiple source model for 6 MV photon beam dose calculations using Monte Carlo.

    PubMed

    Fix, M K; Stampanoni, M; Manser, P; Born, E J; Mini, R; Rüegsegger, P

    2001-05-01

    A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields which include the central 3 x 3 cm2 field without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC calculated primary energy spectrum was verified by an energy spectrum derived from transmission measurements. Comparisons of MC calculated depth dose curves and profiles, using original and PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm. However, the real beam leads to some larger deviations outside the geometrical beam area for large fields. Calculated output factors in 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified. PMID:11384062

  8. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.

    2013-12-01

    We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

  9. A deterministic computational model for the two dimensional electron and photon transport

    NASA Astrophysics Data System (ADS)

    Badavi, Francis F.; Nealy, John E.

    2014-12-01

    A deterministic (non-statistical) two dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general by describing each material through its pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ where -π/2<θ<π/2, and corresponding symmetry is assumed for the range of azimuth angles (0<φ<2π). In the transport formalism, a combined mean-free-path and average trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jovian moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.

  10. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, the sixteen most time-consuming subroutines were examined and nine of them modified to accelerate computations with numerical output equivalent to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  11. Monte Carlo simulation of transport coefficient in organic solar cells

    NASA Astrophysics Data System (ADS)

    Khodakarimi, S.; Hekmatshoar, M. H.; Abbasi, F.

    2016-02-01

    Monte Carlo simulation was used to study the charge transport mechanism inside a three-dimensional bulk heterojunction (BHJ) polymer solar cell having an additional layer on the BHJ active layer. In the BHJ section, the P3HT:PCBM ratio was varied and its optimum value, at which the diffusion coefficient reached its maximum, was determined. The diffusion coefficient and short-circuit current (Jsc) were simulated. At low diffusion coefficients, recombination losses limited the performance, which led to a reduction in the short-circuit current. The results showed that adding a P3HT layer with a thickness of 10 nm over the BHJ layer increased hole mobility and enhanced the short-circuit current and performance of the solar cells. On the other hand, adding a PCBM layer between the active layer and the cathode side acted as an efficient hole blocking layer, so that electrons could be selectively extracted.
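A diffusion coefficient like the one simulated above is typically extracted from the mean squared displacement of random walkers, D = &lt;r^2&gt;/(6t) in three dimensions. The sketch below does this for an unbiased lattice walk; the lattice constant, hop time and walker counts are invented for illustration and are not P3HT:PCBM parameters.

```python
import random

# Toy estimate of a diffusion coefficient from a 3D lattice random walk,
# the quantity a carrier-hopping Monte Carlo extracts for the blend.
# D = <r^2> / (6 t); time is counted in hops, length in lattice units.

def msd_after(n_steps, n_walkers=3000, seed=11):
    """Mean squared displacement after n_steps hops, averaged over walkers."""
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    total = 0.0
    for _ in range(n_walkers):
        x = y = z = 0
        for _ in range(n_steps):
            dx, dy, dz = rng.choice(moves)
            x += dx; y += dy; z += dz
        total += x * x + y * y + z * z
    return total / n_walkers

steps = 500
D = msd_after(steps) / (6 * steps)  # lattice units^2 per hop
# For an unbiased walk <r^2> = n_steps, so D should be close to 1/6.
print(D)
```

In a real device simulation the hops are energetically biased and recombination removes walkers, which is how the low-D regime above suppresses Jsc.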

  12. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
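One of the accelerations named above, replacing a linear search with a binary version, is easy to illustrate. The sketch below (plain Python with a made-up sorted grid, not ITS FORTRAN) checks that the two lookups return identical indices, mirroring the requirement that the accelerated code produce output identical to the original.

```python
import bisect

# Replacing a linear scan of a sorted table (e.g. a cumulative-probability
# grid used to sample interactions) with a binary search. Table values are
# illustrative.

def linear_lookup(grid, x):
    """Index of the first grid value >= x, by linear scan: O(n)."""
    for i, g in enumerate(grid):
        if g >= x:
            return i
    return len(grid)

def binary_lookup(grid, x):
    """Same answer via binary search: O(log n)."""
    return bisect.bisect_left(grid, x)

grid = [i / 1000.0 for i in range(1000)]  # sorted cumulative grid
queries = [0.1234, 0.5, 0.9999]
print([linear_lookup(grid, q) for q in queries] ==
      [binary_lookup(grid, q) for q in queries])  # True: identical results
```

Identical output with fewer comparisons per lookup is precisely the kind of change behind the quoted speedup factors.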

  13. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation

    PubMed Central

    Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun

    2015-01-01

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695

  14. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases from 2 to 4 times when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.

  15. Dissipationless electron transport in photon-dressed nanostructures.

    PubMed

    Kibis, O V

    2011-09-01

It is shown that the electron coupling to photons in field-dressed nanostructures can result in a ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without Joule heating of the nanostructure and is nondissipative. Such dissipationless electron transport can be realized in strongly coupled electron-photon systems with broken time-reversal symmetry, in particular in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519

  16. On refraction in Monte-Carlo simulations of light transport through biological tissues.

    PubMed

    Kolinko, V G; de Mul, F F; Greve, J; Priezzhev, A V

    1997-05-01

    To obtain reliable results from Monte-Carlo simulations of light scattering experiments, a statistically accurate procedure for positioning the photons after refraction between two different scattering media is necessary. Two statistically equivalent algorithms for calculating the position of the photons immediately after crossing an interface are described and justified. PMID:9246866
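One statistically valid way to re-position a photon at a refracting interface can be sketched as follows, reduced to 2D and a flat boundary; the paper's specific algorithms, media and geometry are not reproduced here:

```python
import math

def refract(ux, uz, n1, n2):
    """Snell's law for a 2D unit direction (ux, uz) at a horizontal
    interface (normal along z). Returns None on total internal reflection."""
    sin_i = abs(ux)
    sin_t = n1 / n2 * sin_i
    if sin_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    new_ux = math.copysign(sin_t, ux) if sin_i > 0.0 else 0.0
    return new_ux, math.copysign(cos_t, uz)

def cross_interface(x, z, ux, uz, s_total, z_plane, n1, n2):
    """Move a photon a geometric distance s_total; if the path crosses
    z = z_plane, re-position it at the crossing point, refract the
    direction, and spend the remaining path length in the new medium."""
    s_hit = (z_plane - z) / uz if uz != 0.0 else -1.0
    if not (0.0 < s_hit <= s_total):
        return x + ux * s_total, z + uz * s_total, ux, uz
    x, z = x + ux * s_hit, z_plane
    d = refract(ux, uz, n1, n2)
    if d is None:
        ux, uz = ux, -uz          # total internal reflection: mirror
    else:
        ux, uz = d
    s_rem = s_total - s_hit
    return x + ux * s_rem, z + uz * s_rem, ux, uz

# 45-degree incidence, entering a denser medium (n = 1.0 -> 1.5)
c = math.sqrt(0.5)
x2, z2, ux2, uz2 = cross_interface(0.0, 0.0, c, c, 2.0, 0.5, 1.0, 1.5)
```

Spending the remaining geometric path in the new medium, as done here, is one of the statistically equivalent choices the abstract alludes to; another is to resample the remaining free path from the new medium's attenuation.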

  17. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, since more particle tracks can be simulated within a short response time.
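The seed-per-worker idea behind parallel RNG streams can be sketched as follows; the worker batches are looped serially here as a stand-in for MPI ranks, and the slab-interaction problem, seeds and parameters are illustrative, not MC4's physics:

```python
import math
import random

def track_batch(seed, n, mu, d):
    """One 'rank': transport n particles with its own independently
    seeded RNG stream and tally how many interact within depth d."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n):
        if rng.expovariate(mu) < d:  # exponential free path, mean 1/mu
            absorbed += 1
    return absorbed

def parallel_estimate(n_total, n_workers, mu, d):
    """Distinct seeds per worker stand in for SPRNG/DCMT streams;
    in the real code these batches would run concurrently under MPI."""
    per = n_total // n_workers
    tallies = [track_batch(1000 + w, per, mu, d) for w in range(n_workers)]
    return sum(tallies) / (per * n_workers)

p = parallel_estimate(n_total=200000, n_workers=4, mu=1.0, d=1.0)
expected = 1.0 - math.exp(-1.0)  # analytic interaction probability
```

Because each stream is independent, the combined tally is statistically equivalent to one long serial run, which is the property the SPRNG/DCMT libraries guarantee at scale.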

  18. Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.

    PubMed

    Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats

    2015-09-01

For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes coupled ionizing-particle and optical-photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519

  19. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

    NASA Astrophysics Data System (ADS)

    Malladi, Mayank

Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as the finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are tracked through the computational domain subject to the imposed boundary conditions and scattering effects.
In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both the gray media and non-gray media approaches. The non-gray media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of the volume fraction, size, shape and distribution of the nanowire fillers on heat flow, and hence on thermal conductivity, are studied. In addition, the computational performances of the gray and non-gray media approaches are compared.
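A gray-media phonon MC in the spirit described above can be condensed into a film-transmission toy that shows the size effect behind the conductivity reduction; the boundary treatment and parameters are simplifications invented here, not the thesis's Si/Ge model:

```python
import math
import random

def transmission(film_thickness, mfp, n_phonons=20000, seed=2):
    """Gray-media phonon MC: phonons enter at x=0 with forward-biased
    directions; each free flight is exponential with the given mean
    free path, and every scattering event re-emits isotropically.
    Returns the fraction transmitted through the film."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_phonons):
        x, mu = 0.0, math.sqrt(rng.random())  # cosine-weighted entry
        while True:
            x += mu * rng.expovariate(1.0 / mfp)  # free flight
            if x >= film_thickness:
                transmitted += 1                  # reaches the far side
                break
            if x <= 0.0:
                break                             # returns to the source side
            mu = 2.0 * rng.random() - 1.0         # isotropic re-emission
    return transmitted / n_phonons

t_thin = transmission(0.1, 1.0, seed=2)    # thickness << mfp: near-ballistic
t_thick = transmission(10.0, 1.0, seed=3)  # thickness >> mfp: diffusive
```

When the film is much thinner than the mean free path most phonons cross ballistically; at many mean free paths the walk becomes diffusive and transmission (and hence effective conductivity) drops, which is the interface-scattering effect exploited in the nanocomposites.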

  20. Simple beam models for Monte Carlo photon beam dose calculations in radiotherapy.

    PubMed

Fix, M K; Keller, H; Rüegsegger, P; Born, E J

    2000-12-01

Monte Carlo (code GEANT) produced 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the starting electrons hitting the target was tuned to match measured depth dose data. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head. Scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways to determine the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5 × 5 to 40 × 40 cm² and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis-dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10 × 10 cm² were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached with model 4. However, increasing deviations were found with increasing field size for models 1-3. Large deviations resulted for the profiles of models 2 and 3, because these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4. PMID:11190957
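The difference between the simplest and the most detailed beam model can be sketched as follows; the spectrum, field size and linear "softening" factor are invented for illustration and are not the paper's commissioned data:

```python
import random

def sample_model1(rng, half, energies, weights):
    """Model 1: uniform fluence across the field, one spectrum everywhere."""
    x = rng.uniform(-half, half)
    e = rng.choices(energies, weights=weights)[0]
    return x, e

def sample_model4(rng, half, energies, weights, softening=0.15):
    """Model 4 (sketch): the sampled energy is degraded with off-axis
    distance to mimic an off-axis-dependent spectrum; the real model
    also samples from the measured spatial fluence distribution."""
    x = rng.uniform(-half, half)
    e = rng.choices(energies, weights=weights)[0]
    return x, e * (1.0 - softening * abs(x) / half)

rng = random.Random(5)
energies = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # MeV, illustrative spectrum
weights = [5.0, 4.0, 3.0, 2.0, 1.0, 0.5]
m1 = sum(sample_model1(rng, 10.0, energies, weights)[1] for _ in range(20000)) / 20000
m4 = sum(sample_model4(rng, 10.0, energies, weights)[1] for _ in range(20000)) / 20000
```

The mean energy of the off-axis-aware model is lower than the flat-spectrum model's, which is the spectral softening that drives the large-field profile differences reported above.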

  1. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study

    PubMed Central

    Berg, Eric; Roncali, Emilie; Cherry, Simon R.

    2015-01-01

    Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040

  2. A Fano cavity test for Monte Carlo proton transport algorithms

    SciTech Connect

    Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo

    2014-01-15

Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E0 and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E0 and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE0/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes.
For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy straggling if step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the considered MC code is able to track the charged particle considered.
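The logic of a Fano-type self-consistency test can be reproduced in a deliberately crude 1D toy: interactions are sampled uniformly per unit mass and secondaries slow down with a density-scaled stopping power, so the dose per unit mass must come out flat across a low-density cavity. All geometry and numbers below are illustrative, not the study's Geant4/PENH setup:

```python
import random

E0 = 1.0      # kinetic energy given to each generated "proton"
S_MASS = 5.0  # mass stopping power (energy per unit mass thickness)
L, N_BINS = 10.0, 100

def rho(x):
    """Mass density: a low-density 'cavity' in the middle of the slab."""
    return 0.1 if 4.5 <= x < 5.5 else 1.0

def fano_dose(n_src=60000, seed=3):
    """Interaction sites uniform per unit mass, straight-line CSDA
    slowing-down along +/-x; under the Fano theorem the resulting
    dose (energy per mass) is independent of the density profile."""
    rng = random.Random(seed)
    dx = L / 2000.0
    edep = [0.0] * N_BINS
    for _ in range(n_src):
        # interaction density per unit volume proportional to rho: rejection
        while True:
            x = rng.uniform(0.0, L)
            if rng.random() < rho(x):
                break
        e, direction = E0, rng.choice((-1.0, 1.0))
        while e > 0.0 and 0.0 <= x <= L:
            de = min(e, S_MASS * rho(x) * dx)         # CSDA loss over dx
            edep[min(int(x / L * N_BINS), N_BINS - 1)] += de
            e -= de
            x += direction * dx
    width = L / N_BINS
    return [edep[b] / (rho((b + 0.5) * width) * width) for b in range(N_BINS)]

dose = fano_dose()
cavity = sum(dose[46:54]) / 8.0       # bins fully inside the cavity
reference = sum(dose[15:35]) / 20.0   # equilibrium region, dense medium
```

Away from the outer boundaries the cavity dose per unit mass matches the surrounding medium to within statistics, which is exactly the invariance a transport code must reproduce to pass the test.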

  3. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.

    PubMed

    Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B

    2014-11-01

    The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
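The γ-index criterion used for the comparison can be sketched in 1D; the profiles below are synthetic, and a real analysis would interpolate the evaluated distribution rather than compare grid points only:

```python
import math

def gamma_index_1d(ref, ev, xs, dose_tol, dist_tol):
    """Global 1D gamma analysis: for each reference point, the minimum
    over the evaluated profile of the combined dose/distance metric."""
    d_max = max(ref)  # global normalization dose
    gammas = []
    for xi, di in zip(xs, ref):
        best = float("inf")
        for xj, dj in zip(xs, ev):
            dd = (dj - di) / (dose_tol * d_max)  # dose-difference term
            dx = (xj - xi) / dist_tol            # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

xs = [0.1 * k for k in range(101)]                     # positions in cm
ref = [math.exp(-((x - 5.0) ** 2) / 4.0) for x in xs]  # reference profile
g_ok = gamma_index_1d(ref, [d * 1.01 for d in ref], xs, 0.03, 0.3)   # 3%/3 mm
g_bad = gamma_index_1d(ref, [d * 1.10 for d in ref], xs, 0.03, 0.3)  # 10% off
pass_rate = sum(v <= 1.0 for v in g_ok) / len(g_ok)
```

A point passes when γ ≤ 1: a 1% global offset passes everywhere at 3%/3 mm, while a 10% offset fails near the peak.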

  4. Neutron contamination of Varian Clinac iX 10 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, S.; Tursinah, R.; Rhani, M. F.; Soh, R. C. X.; Haryanto, F.; Arif, I.

    2016-03-01

High-energy medical accelerators are commonly used in radiotherapy to increase the effectiveness of treatments. Neutrons can be emitted from a medical accelerator when high-energy X-rays strike any of its component materials, an issue that has attracted the attention of many researchers. Neutron contamination causes problems for image quality and for the radiation protection of patients and radiation oncologists. This study concerns the simulation of the neutron contamination emitted from a Varian Clinac iX 10 MV using a Monte Carlo code system. As the neutron production process is very complex, a Monte Carlo simulation with the MCNPX code system was carried out to study this contamination. The design of this medical accelerator was modelled based on the actual materials and geometry. The maximum energies of the photons and neutrons in the scoring plane were 10.5 and 2.239 MeV, respectively. The number and energy of the particles produced depend on the depth and the distance from the beam axis. From these results, it is concluded that the neutron contamination produced by the 10 MV photon beam of this linac in a typical treatment is not negligible.

  5. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units.

    PubMed

    Fang, Qianqian; Boas, David A

    2009-10-26

    We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging. PMID:19997242
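The core photon-migration loop that such GPU codes parallelize can be sketched serially in Python; implicit capture, a fixed weight cutoff without roulette, and an index-matched boundary are simplifications relative to the published algorithm:

```python
import math
import random

def hg_cos(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def diffuse_reflectance(mu_a, mu_s, g, n_photons=10000, seed=4):
    """Semi-infinite homogeneous medium, index-matched surface: photons
    launched straight down lose weight by the single-scattering albedo
    at each collision; weight crossing back through z=0 is tallied."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    escaped = 0.0
    for _ in range(n_photons):
        z, mu_z, w = 0.0, 1.0, 1.0
        while w > 1e-4:
            z += mu_z * rng.expovariate(mu_t)    # free path, mean 1/mu_t
            if z <= 0.0:
                escaped += w                     # photon leaves the medium
                break
            w *= albedo                          # implicit absorption
            ct = hg_cos(g, rng)                  # polar scattering angle
            st = math.sqrt(max(0.0, 1.0 - ct * ct))
            phi = 2.0 * math.pi * rng.random()   # isotropic azimuth
            sz = math.sqrt(max(0.0, 1.0 - mu_z * mu_z))
            mu_z = mu_z * ct + sz * st * math.cos(phi)  # new z-cosine
    return escaped / n_photons

r_low = diffuse_reflectance(0.1, 10.0, 0.9)           # weak absorber
r_high = diffuse_reflectance(1.0, 10.0, 0.9, seed=5)  # strong absorber
```

Each photon history is independent, which is why mapping one history per GPU thread yields the large speedups reported; increasing the absorption coefficient lowers the diffuse reflectance, as the check below confirms.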

  6. Robust light transport in non-Hermitian photonic lattices

    NASA Astrophysics Data System (ADS)

    Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della

    2015-08-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
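The imaginary-gauge-field mechanism can be made concrete with a small Hatano-Nelson chain; the lattice size, gauge parameter h and disorder strength below are illustrative choices, not values from the paper:

```python
import numpy as np

def hatano_nelson(n, J=1.0, h=0.2, disorder=0.0, seed=6):
    """Tight-binding chain with an imaginary gauge field h: rightward
    hopping J*exp(h) is amplified, leftward J*exp(-h) is damped
    (periodic boundary conditions, optional on-site disorder)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((n, n), dtype=complex)
    for i in range(n):
        H[(i + 1) % n, i] = J * np.exp(h)    # amplified forward hopping
        H[i, (i + 1) % n] = J * np.exp(-h)   # damped backward hopping
    H += np.diag(disorder * rng.uniform(-1.0, 1.0, n))
    return H

ev_clean = np.linalg.eigvals(hatano_nelson(60))
ev_herm = np.linalg.eigvals(hatano_nelson(60, h=0.0))
ev_disorder = np.linalg.eigvals(hatano_nelson(60, disorder=0.3))
```

With h = 0 the chain is Hermitian and the spectrum is real; with a nonzero imaginary gauge field the eigenvalues trace a loop in the complex plane (forward-moving modes amplified, backward-moving modes damped), and for weak disorder part of the spectrum stays complex, the signature of the non-Hermitian delocalization transition invoked above.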

  7. Robust light transport in non-Hermitian photonic lattices

    PubMed Central

    Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932

  8. Robust light transport in non-Hermitian photonic lattices.

    PubMed

    Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932

  9. Status of the MORSE multigroup Monte Carlo radiation transport code

    SciTech Connect

    Emmett, M.B.

    1993-06-01

There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the better known of the two and has undergone extensive use for many years. MORSE-SGC was originally developed around 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

  10. Simulation of Astronomical Images from Optical Survey Telescopes Using a Comprehensive Photon Monte Carlo Approach

    NASA Astrophysics Data System (ADS)

    Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.; Rasmussen, A. P.; Peng, E.; Ahmad, Z.; Bankert, J.; Chang, C.; Claver, C.; Gilmore, D. K.; Grace, E.; Hannel, M.; Hodge, M.; Lorenz, S.; Lupu, A.; Meert, A.; Nagarajan, S.; Todd, N.; Winans, A.; Young, M.

    2015-05-01

We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large-scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer-scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction-limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover the system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.

  11. Variance and efficiency in Monte Carlo transport calculations

    NASA Astrophysics Data System (ADS)

    Lux, Iván

    1980-09-01

    Recent developments in Monte Carlo variance and efficiency analysis are summarized. Sufficient conditions are given under which the variance of a Monte Carlo game is less than that of another. The efficiencies of the ELP method and a game with survival biasing and Russian roulette are treated.

  12. Radiation induced currents in parallel plate ionization chambers: Measurement and Monte Carlo simulation for megavoltage photon and electron beams

    SciTech Connect

    Abdel-Rahman, Wamied; Seuntjens, Jan P.; Verhaegen, Frank; Podgorsak, Ervin B.

    2006-09-15

    Polarity effects in ionization chambers are caused by a radiation induced current, also known as Compton current, which arises as a charge imbalance due to charge deposition in electrodes of ionization chambers. We used a phantom-embedded extrapolation chamber (PEEC) for measurements of Compton current in megavoltage photon and electron beams. Electron contamination of photon beams and photon contamination of electron beams have a negligible effect on the measured Compton current. To allow for a theoretical understanding of the Compton current produced in the PEEC effect we carried out Monte Carlo calculations with a modified user code, the COMPTON/EGSnrc. The Monte Carlo calculated COMPTON currents agree well with measured data for both photon and electron beams; the calculated polarity correction factors, on the other hand, do not agree with measurement results. The conclusions reached for the PEEC can be extended to parallel-plate ionization chambers in general.

  13. Radiation induced currents in parallel plate ionization chambers: measurement and Monte Carlo simulation for megavoltage photon and electron beams.

    PubMed

    Abdel-Rahman, Wamied; Seuntjens, Jan P; Verhaegen, Frank; Podgorsak, Ervin B

    2006-09-01

Polarity effects in ionization chambers are caused by a radiation induced current, also known as Compton current, which arises as a charge imbalance due to charge deposition in electrodes of ionization chambers. We used a phantom-embedded extrapolation chamber (PEEC) for measurements of Compton current in megavoltage photon and electron beams. Electron contamination of photon beams and photon contamination of electron beams have a negligible effect on the measured Compton current. To allow for a theoretical understanding of the Compton current produced in the PEEC effect we carried out Monte Carlo calculations with a modified user code, the COMPTON/EGSnrc. The Monte Carlo calculated COMPTON currents agree well with measured data for both photon and electron beams; the calculated polarity correction factors, on the other hand, do not agree with measurement results. The conclusions reached for the PEEC can be extended to parallel-plate ionization chambers in general. PMID:17022201

  14. A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport

    SciTech Connect

    Wollaeger, Ryan T.; Densmore, Jeffery D.

    2012-06-19

Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step-differenced solution.

  15. An automated variance reduction method for global Monte Carlo neutral particle transport problems

    NASA Astrophysics Data System (ADS)

    Cooper, Marc Andrew

    A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the symbolic weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
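Splitting and Russian roulette against a weight window, the mechanism this abstract relies on, can be sketched as follows; the window bounds and the midpoint survival weight are common illustrative choices, not the thesis's prescription:

```python
import random

def apply_weight_window(w, w_low, w_high, rng):
    """Weight-window check for one particle: split above the window,
    play Russian roulette below it, pass through unchanged inside it.
    Returns the list of surviving weights (possibly empty)."""
    if w > w_high:
        n = int(w / w_high) + 1               # split into n lighter copies
        return [w / n] * n
    if w < w_low:
        w_survive = 0.5 * (w_low + w_high)    # common survival-weight choice
        if rng.random() < w / w_survive:      # survival probability w/w_survive
            return [w_survive]
        return []                             # rouletted away
    return [w]

rng = random.Random(7)
split = apply_weight_window(5.3, 0.1, 1.0, rng)
# fairness: roulette preserves the expected total weight
trials = 200000
mean_w = sum(sum(apply_weight_window(0.01, 0.1, 1.0, rng))
             for _ in range(trials)) / trials
```

Both branches are unbiased: splitting conserves weight exactly, and roulette conserves it in expectation, which is why the window can be tuned (here, from a deterministic forward solution) purely to reduce variance.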

  16. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.

    PubMed

    Demol, Benjamin; Viard, Romain; Reynaert, Nick

    2015-01-01

The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed; their distribution was centered on zero with a standard deviation below 2% (3σ). By contrast, even a slight modification of the hydrogen content produces important dose differences.
Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using only the hydrogen content of the tissues, a conclusion that might impact MRI-based dose calculation and can also help in selecting the optimal tissue substitutes when calibrating MVCT devices. PMID:26699320
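The physical reason hydrogen dominates is that megavoltage interactions are Compton-dominated and scale with the electron density, proportional to the weight-fraction-averaged Z/A; hydrogen's Z/A ≈ 1 is roughly double that of every other tissue element, so replacing all non-hydrogen elements by oxygen barely changes the electron density. A sketch with approximate, illustrative soft-tissue weight fractions (not the paper's 15 calibrated subsets):

```python
AVOGADRO = 6.02214076e23
Z_OVER_A = {"H": 1 / 1.008, "C": 6 / 12.011, "N": 7 / 14.007,
            "O": 8 / 15.999}

def electrons_per_gram(mass_fractions):
    return AVOGADRO * sum(w * Z_OVER_A[el] for el, w in mass_fractions.items())

# Approximate soft-tissue weight fractions (illustrative).
soft_tissue = {"H": 0.102, "C": 0.143, "N": 0.034, "O": 0.721}
# Hydrogen-plus-oxygen scheme: every element other than H becomes O.
substitute = {"H": 0.102, "O": 0.898}

ne_full = electrons_per_gram(soft_tissue)
ne_sub = electrons_per_gram(substitute)
print(abs(ne_sub / ne_full - 1))   # well below 1%
```

Because C, N, and O all have Z/A ≈ 0.50, swapping them for one another leaves the electrons per gram, and hence the Compton mass attenuation, essentially unchanged.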

  17. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in the recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our study method on both accuracy and efficiency. PMID:20445737
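The MC tissue-transport kernel that platforms of this kind build on can be sketched in a few lines; this is a hedged simplification with illustrative optical coefficients, an index-matched boundary, and isotropic scattering in place of the Henyey-Greenstein phase function.

```python
import numpy as np

rng = np.random.default_rng(2)

# Semi-infinite turbid medium; coefficients are illustrative (1/cm).
mu_a, mu_s = 1.0, 10.0
mu_t = mu_a + mu_s

def simulate(n_photons=20000):
    reflected = absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0        # launched straight into the tissue
        while True:
            z += uz * rng.exponential(1.0 / mu_t)
            if z < 0.0:                 # escaped through the surface
                reflected += w
                break
            absorbed += w * mu_a / mu_t # deposit a fraction per interaction
            w *= mu_s / mu_t
            uz = rng.uniform(-1.0, 1.0) # isotropic: uniform direction cosine
            if w < 1e-4:                # Russian roulette on low weights
                if rng.random() < 0.1:
                    w *= 10.0
                else:
                    break
    return reflected / n_photons, absorbed / n_photons

R, A = simulate()
print(R, A)   # R + A ≈ 1: no transmission from a semi-infinite medium
```

Energy conservation (reflected plus absorbed weight summing to one) is a convenient self-check on any such random-walk kernel.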

  18. The difference of scoring dose to water or tissues in Monte Carlo dose calculations for low energy brachytherapy photon sources

    SciTech Connect

    Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank

    2011-03-15

    Purpose: The goal of this work is to compare D{sub m,m} (radiation transported in medium; dose scored in medium) and D{sub w,m} (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds, as well as an EBS operating at 50 kV. Ratios of D{sub w,m} over D{sub m,m} are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using {sup 103}Pd) and prostate (using {sup 125}I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D{sub 90} values are compared for D{sub w,m} and D{sub m,m}. Results: (1) Differences (D{sub w,m}/D{sub m,m}-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D{sub w,m}/D{sub m,m} is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%. 
(2) The ratio of D{sub 90(w,m)} over D{sub 90(m,m)} for clinical implants matches D{sub w,m}/D{sub m,m} at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
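Under large cavity theory the conversion factor reduces to a ratio of spectrum-averaged mass energy-absorption coefficients, which is straightforward to evaluate; the line spectrum and coefficient values below are placeholders for illustration only, not NIST or paper data.

```python
# Placeholder 125I-like line spectrum (keV -> fluence weight) and
# placeholder mass energy-absorption coefficients (cm^2/g).
spectrum = {27.4: 0.40, 31.4: 0.25, 35.5: 0.35}
mu_en_water = {27.4: 0.16, 31.4: 0.11, 35.5: 0.08}
mu_en_tissue = {27.4: 0.14, 31.4: 0.10, 35.5: 0.075}

# Large cavity theory: D_{w,m}/D_{m,m} is the ratio of energy-fluence-
# averaged mass energy-absorption coefficients, water over medium.
num = sum(wt * e * mu_en_water[e] for e, wt in spectrum.items())
den = sum(wt * e * mu_en_tissue[e] for e, wt in spectrum.items())
ratio = num / den
print(ratio)
```

The abstract's observation that the factor varies little with distance corresponds to the spectrum weights in this average shifting only moderately as the beam hardens.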

  19. Energy-loss straggling algorithms for Monte Carlo electron transport.

    PubMed

    Chibani, Omar

    2002-10-01

A new method is presented for the modeling of the electron (positron) energy-loss straggling in Monte Carlo transport simulations. First, the Vavilov energy-loss distribution is calculated for electrons and positrons using the Møller and Bhabha collision cross-sections, respectively. The maximum energy transfer in a single collision (E(S)) is treated as a variable. Binding effects from low-energy collisions are modeled using the Blunck and Westphal model. Second, new algorithms are developed to fit the Vavilov distribution. These algorithms are based on the first three moments of the energy-loss distribution and are suitable for rapid random sampling of the energy loss. The new algorithms are validated against the Vavilov distribution for electrons and positrons, in water and lead, at kinetic energies E0 of 0.1, 1, and 10 MeV, and for several values of E(S) (10, 50, 100, and 200 keV). The developed algorithms are incorporated in a new version of the GEPTS Monte Carlo code called GEPTS(III). Collisions involving energy transfers larger than E(S) are simulated individually, and the energy loss due to soft collisions (energy transfers less than E(S)) is sampled using the new algorithms. The straggling effect is therefore taken into account for any chosen E(S) value. GEPTS(III) and EGSnrc are used for the calculation of (1) electron dose distributions in water and (2) energy spectra for electrons passing through water and tungsten slabs. Electron beams of 1, 2, 5, 10, and 20 MeV along with varying E(S) values are considered. Electron dose distributions in water are rather insensitive to the soft collision straggling. The use of the new algorithms results in a slight gain in computation time when relatively large E(S) values are used (e.g., E(S) = 1 MeV for 10 MeV electrons). However, the calculation of electron energy spectra is very sensitive to the soft collision straggling.
GEPTS(III) (E(S) = 200 keV) is about 5 and 11 times faster than EGSnrc (E(S) = 1 keV) for the case of 2 and 20 MeV electrons passing through 0.025 and 0.25 cm water slabs, respectively. Contrary to EGSnrc, GEPTS(III) accounts for the energy-spectrum broadening due to the binding effects. The resulting differences between the two codes are significant for 5 and 10 MeV electrons passing through a 0.01 cm tungsten slab. Gains in GEPTS(III) computation times (approximately a factor 5) are also observed for tungsten. In short, GEPTS(III) provides significant advantages (rapidity and accuracy) for electron transport simulations, especially those dealing with energy-spectrum calculations, as encountered in clinical electron beam modeling studies. In other respects, the developed approach is more suitable than class-II codes for the use of accurate electron cross sections (numerical data) at low energy (<100 keV). PMID:12408312
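The moment-matching idea can be illustrated with the simplest moment-preserving sampler, a Gaussian matched to the first two moments of the soft-collision loss spectrum (the paper's algorithms also reproduce the third moment, which captures skewness); the numbers below are illustrative, not GEPTS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative first two moments of the soft-collision energy loss
# over one condensed-history step.
mean_loss = 0.050          # MeV: restricted stopping power x step length
var_loss = 0.0004          # MeV^2: from the second moment of the spectrum

def sample_soft_loss(n):
    # Clip at zero: an energy loss cannot be negative.
    return np.clip(rng.normal(mean_loss, np.sqrt(var_loss), n), 0.0, None)

losses = sample_soft_loss(100000)
print(losses.mean(), losses.var())   # reproduces the imposed moments
```

Sampling the aggregate soft loss this way, while simulating only transfers above E(S) individually, is what allows the straggling effect to be retained for any choice of E(S).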

  20. Transport properties of pseudospin-1 photons (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Chan, Che Ting; Fang, Anan; Zhang, Zhao-Qing; Louie, Steven G.

    2015-09-01

Pseudospin is of central importance in governing many unusual transport properties of graphene and other artificial systems which have pseudospins of 1/2. These unconventional transport properties are manifested in phenomena such as Klein tunneling and collimation of electron beams in one-dimensional external potentials. Here we show that in certain photonic crystals (PCs) exhibiting conical dispersions at the center of the Brillouin zone, the eigenstates near the "Dirac-like point" can be described by an effective spin-orbit Hamiltonian with a pseudospin of 1. This effective Hamiltonian describes within a unified framework the wave propagation in both positive and negative refractive index media, which correspond to the upper and lower conical bands, respectively. Different from the Berry phase of π for the Dirac cone of pseudospin-1/2 systems, the Berry phase for the Dirac-like cone turns out to be zero from this pseudospin-1 Hamiltonian. In addition, we find that a change of length scale of the PC can shift the Dirac-like cone rigidly up or down in frequency with its group velocity unchanged, hence mimicking a gate voltage in graphene and allowing for a simple mechanism to control the flow of pseudospin-1 photons. As a photonic analogue of the electron potential, the length-scale-induced Dirac-like point shift is effectively a photonic potential within the effective pseudospin-1 Hamiltonian description. At the interface of two different potentials, the 3-component spinor gives rise to distinct boundary conditions which do not require each component of the wave function to be continuous, leading to new wave transport behaviors as shown in Klein tunneling and supercollimation. For example, the Klein tunneling of pseudospin-1 photons is much less anisotropic with respect to the incident angle than that of pseudospin-1/2 electrons, and collimation can be more robust with pseudospin-1 than pseudospin-1/2.
The special wave transport properties of pseudospin-1 photons, coupled with the discovery that the effective photonic "potential" can be varied by a simple length-scale change, may offer new ways to control photon transport. We will also explore the difference between pseudospin-1 photons and pseudospin-1/2 particles when they encounter disorder.
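The band structure implied by the effective Hamiltonian H = v(Sx kx + Sy ky) can be checked directly with the standard spin-1 matrices: its eigenvalues are ±v|k| (the two cones) and 0 (a flat band). The group velocity v below is a hypothetical normalisation.

```python
import numpy as np

# Spin-1 matrices and the effective pseudospin-1 Hamiltonian near the
# Dirac-like point.
s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])

v, kx, ky = 1.0, 0.3, 0.4              # |k| = 0.5
H = v * (kx * Sx + ky * Sy)
evals = np.sort(np.linalg.eigvalsh(H))
print(evals)   # [-v|k|, 0, +v|k|]: two cones plus a flat band
```

The upper and lower cones are the positive- and negative-index branches referred to in the abstract, meeting the flat band at the Dirac-like point k = 0.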

  1. Photon transport enhanced by transverse Anderson localization in disordered superlattices

    NASA Astrophysics Data System (ADS)

    Hsieh, P.; Chung, C.; McMillan, J. F.; Tsai, M.; Lu, M.; Panoiu, N. C.; Wong, C. W.

    2015-03-01

    Controlling the flow of light at subwavelength scales provides access to functionalities such as negative or zero index of refraction, transformation optics, cloaking, metamaterials and slow light, but diffraction effects severely restrict our ability to control light on such scales. Here we report the photon transport and collimation enhanced by transverse Anderson localization in chip-scale dispersion-engineered anisotropic media. We demonstrate a photonic crystal superlattice structure in which diffraction is nearly completely arrested by cascaded resonant tunnelling through transverse guided resonances. By modifying the geometry of more than 4,000 scatterers in the superlattices we add structural disorder controllably and uncover the mechanism of disorder-induced transverse localization. Arrested spatial divergence is captured in the power-law scaling, along with exponential asymmetric mode profiles and enhanced collimation bandwidths for increasing disorder. With increasing disorder, we observe the crossover from cascaded guided resonances into the transverse localization regime, beyond both the ballistic and diffusive transport of photons.

  2. A Monte Carlo simulation for predicting photon return from sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Kibblewhite, Edward; Jin, Kai; Xue, Suijian; Shen, Zhixia; Bo, Yong; Zuo, Junwei; Wei, Kai

    2015-10-01

Sodium laser guide stars are an ideal source for astronomical adaptive optics systems correcting wave-front aberration caused by atmospheric turbulence. However, compact, high-quality sodium lasers with output power above 20 W are costly and difficult to manufacture, and even such a laser is not guaranteed to produce a sufficiently bright guide star, owing to the physics of sodium atoms in the atmosphere. A prediction tool that could estimate the photon-return performance of arbitrary laser output formats would therefore be helpful before an actual laser is designed. Based on rate equations, we developed Monte Carlo simulation software that can predict sodium laser guide star photon return for arbitrary laser formats. In this paper, we describe the model underlying our simulation and its implementation, and present comparisons with field test data.
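The rate-equation approach can be illustrated with a toy two-level sodium atom; this is a hedged sketch only, since the real D2 system involves many magnetic substates, optical pumping, and atomic velocity classes, and the excitation rate below is a hypothetical laser intensity.

```python
# Toy two-level rate-equation model of sodium excitation.
tau = 16.2e-9          # sodium D2 upper-state lifetime (s)
R = 1.0e6              # stimulated excitation rate (1/s), hypothetical

dt, n2 = 1e-10, 0.0    # time step (s), excited-state fraction
for _ in range(5000):  # integrate 0.5 us, many effective time constants
    n1 = 1.0 - n2
    n2 += dt * (R * (n1 - n2) - n2 / tau)

steady = R * tau / (1.0 + 2.0 * R * tau)   # analytic steady state
print(n2, steady, n2 / tau)   # last value: fluorescence rate per atom
```

The photon return per atom is the spontaneous emission rate n2/tau; a Monte Carlo treatment replaces the analytic steady state with stochastic excitation and emission histories for realistic pulse formats.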

  3. Validation of Monte Carlo calculated surface doses for megavoltage photon beams

    SciTech Connect

    Abdel-Rahman, Wamied; Seuntjens, Jan P.; Verhaegen, Frank; Deblois, Francois; Podgorsak, Ervin B.

    2005-01-01

Recent work has shown that there is significant uncertainty in measuring build-up doses in megavoltage photon beams, especially at high energies. In the present investigation we used a phantom-embedded extrapolation chamber (PEEC) made of Solid Water{sup TM} to validate Monte Carlo (MC)-calculated doses in the dose build-up region for 6 and 18 MV x-ray beams. The study showed that the percentage depth ionizations (PDIs) obtained from measurements are higher than the percentage depth doses (PDDs) obtained with Monte Carlo techniques. To validate the MC-calculated PDDs, the design of the PEEC was incorporated into the simulations. While the MC-calculated and measured PDIs in the dose build-up region agree with one another for the 6 MV beam, a non-negligible difference is observed for the 18 MV x-ray beam. A number of experiments and theoretical studies of various possible effects that could be the source of this discrepancy were performed. The contribution of contaminating neutrons and protons to the build-up dose region in the 18 MV x-ray beam is negligible. Moreover, the MC calculations using the XCOM photon cross-section database and the NIST bremsstrahlung differential cross section do not explain the discrepancy between the MC calculations and measurement in the dose build-up region for the 18 MV beam. A simple incorporation of triplet production events into the MC dose calculation increases the calculated doses in the build-up region but does not fully account for the discrepancy between measurement and calculations for the 18 MV x-ray beam.

  4. Validation of Monte Carlo calculated surface doses for megavoltage photon beams.

    PubMed

    Abdel-Rahman, Wamied; Seuntjens, Jan P; Verhaegen, Frank; Deblois, François; Podgorsak, Ervin B

    2005-01-01

Recent work has shown that there is significant uncertainty in measuring build-up doses in megavoltage photon beams, especially at high energies. In the present investigation we used a phantom-embedded extrapolation chamber (PEEC) made of Solid Water to validate Monte Carlo (MC)-calculated doses in the dose build-up region for 6 and 18 MV x-ray beams. The study showed that the percentage depth ionizations (PDIs) obtained from measurements are higher than the percentage depth doses (PDDs) obtained with Monte Carlo techniques. To validate the MC-calculated PDDs, the design of the PEEC was incorporated into the simulations. While the MC-calculated and measured PDIs in the dose build-up region agree with one another for the 6 MV beam, a non-negligible difference is observed for the 18 MV x-ray beam. A number of experiments and theoretical studies of various possible effects that could be the source of this discrepancy were performed. The contribution of contaminating neutrons and protons to the build-up dose region in the 18 MV x-ray beam is negligible. Moreover, the MC calculations using the XCOM photon cross-section database and the NIST bremsstrahlung differential cross section do not explain the discrepancy between the MC calculations and measurement in the dose build-up region for the 18 MV beam. A simple incorporation of triplet production events into the MC dose calculation increases the calculated doses in the build-up region but does not fully account for the discrepancy between measurement and calculations for the 18 MV x-ray beam. PMID:15719980

  5. Evaluation of Electron Contamination in Cancer Treatment with Megavoltage Photon Beams: Monte Carlo Study

    PubMed Central

    Seif, F.; Bayatiani, M. R.

    2015-01-01

Background Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination, which can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Method The 6 MV photon beam of a Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's specifications. The validation was done by comparing the calculated depth doses and profiles with dosimetry measurements in a water phantom (error less than 2%). Percentage depth doses (PDDs), profiles and the contamination electron energy spectrum were calculated for different therapeutic field sizes (5×5 to 40×40 cm2). Results The electron contamination dose was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for the 5×5 cm2 field to 27% for the 40×40 cm2 field. Conclusion Based on the results, the effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in treatment planning systems. PMID:25973409

  6. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

    SciTech Connect

    Miller, S.G.

    1988-08-01

    Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

  7. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  8. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.

    1982-01-01

    Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.

  9. Detailed calculation of inner-shell impact ionization to use in photon transport codes

    NASA Astrophysics Data System (ADS)

    Fernandez, Jorge E.; Scot, Viviana; Verardi, Luca; Salvat, Francesc

    2014-02-01

Secondary electrons can modify the intensity of the XRF characteristic lines by means of a mechanism known as inner-shell impact ionization (ISII). The ad-hoc code KERNEL (which calls the PENELOPE package) has been used to characterize the electron correction in terms of angular, spatial and energy distributions. It is demonstrated that the angular distribution of the characteristic photons due to ISII can be safely considered as isotropic, and that the source of photons from electron interactions is well represented as a point source. The energy dependence of the correction is described using an analytical model in the energy range 1-150 keV, for all the emission lines (K, L and M) of the elements with atomic numbers Z=11-92. A new photon kernel comprising the ISII correction is introduced, suitable for adoption in photon transport codes (deterministic or Monte Carlo) with minimal effort. The impact of the correction is discussed for the most intense K (Kα1,Kα2,Kβ1) and L (Lα1,Lα2) lines.

  10. Determination of peripheral underdosage at the lung-tumor interface using Monte Carlo radiation transport calculations

    SciTech Connect

    Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick

    2012-04-01

Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using two Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm{sup 3} regions at proximal, distal, and lateral aspects of a spherical tumor for both 6-MV and 15-MV photon beams. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target; its mean is 0.97 and 0.95 for a 6-MV and a 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
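The DRF definition above is simple enough to state directly in code; the peripheral and central dose values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Dose reduction factor (DRF): mean dose at the six cardinal peripheral
# points divided by the central dose.  Dose values are hypothetical.
peripheral = [0.96, 0.97, 0.98, 0.95, 0.99, 0.97]  # normalised doses
central = 1.00

drf = sum(peripheral) / (len(peripheral) * central)
print(drf)   # ≈ 0.97, in line with the 6-MV mean reported above
```

A clinician multiplying a TPS-predicted central dose by the tabulated DRF for the relevant beam energy obtains an estimate of the peripheral dose the TPS fails to predict.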

  11. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

    NASA Astrophysics Data System (ADS)

    Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

    2014-03-01

Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and to set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.

  12. Utilization of a Photon Transport Code to Investigate Radiation Therapy Treatment Planning Quantities and Techniques.

    NASA Astrophysics Data System (ADS)

    Palta, Jatinder Raj

The versatile computer program MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy group representation of the transport equation provides a concise approach to applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three-dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air-ratio and bone-air-ratio are proposed in lieu of the presently used correction factors, which are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated for any point brachytherapy source with any arbitrary energy spectrum at various radial distances from this family of curves.

  13. Comparative analysis of discrete and continuous absorption weighting estimators used in Monte Carlo simulations of radiative transport in turbid media

    PubMed Central

    Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan

    2014-01-01

    We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
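The two estimators compared above can be sketched side by side in a semi-infinite medium: the discrete scheme samples interactions at the total rate and removes a weight fraction at each collision, while the continuous scheme samples scattering events only and attenuates the weight exponentially along each path. The optical coefficients are illustrative, and scattering is taken as isotropic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Semi-infinite medium; illustrative coefficients (1/cm).
mu_a, mu_s = 1.0, 10.0
mu_t = mu_a + mu_s

def reflectance(discrete, n=20000):
    total = 0.0
    for _ in range(n):
        z, uz, w = 0.0, 1.0, 1.0
        while w > 1e-4:          # low-weight cut-off (shared by both modes)
            s = rng.exponential(1.0 / (mu_t if discrete else mu_s))
            if z + uz * s < 0.0:                      # photon exits the surface
                if not discrete:
                    w *= np.exp(-mu_a * (z / -uz))    # attenuate the exit path
                total += w
                break
            z += uz * s
            if discrete:
                w *= mu_s / mu_t          # absorb weight at the interaction
            else:
                w *= np.exp(-mu_a * s)    # absorb weight along the path
            uz = rng.uniform(-1.0, 1.0)
    return total / n

r_daw = reflectance(True)
r_caw = reflectance(False)
print(r_daw, r_caw)   # both estimators converge to the same reflectance
```

Agreement of the two totals reflects the unbiasedness result in the abstract; their differing variance behaviour shows up only in spatially resolved tallies, not in this integrated quantity.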

  14. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC

    NASA Astrophysics Data System (ADS)

    Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

    2011-02-01

In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated both the overly broad dose profiles and the increased dose along the central axis. The beam model was validated with measurements, with agreement mostly within 3%/3 mm.

  15. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC.

    PubMed

    Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

    2011-02-01

    In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, and electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used; no additional secondary or tertiary add-ons such as applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine was developed to determine the initial electron energy spectrum by fitting a Gaussian spectrum to the most prominent features of the depth-dose curves. Comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of the initial electrons was fitted to lateral dose profiles beyond the range of the electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on the dose calculation was studied. The air density for the simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields; a higher value eliminated two discrepancies, namely overly broad dose profiles and an increased dose along the central axis. The beam model was validated against measurements, with agreement mostly within 3%/3 mm. PMID:21242628
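
    A routine of the kind described, deriving a beam-energy estimate from measured depth-dose data, can be sketched via the classic rule of thumb E0 ≈ 2.33 R50 for electron beams in water (a toy stand-in: the actual routine fits a full Gaussian spectrum, and the synthetic curve and function names below are hypothetical):

```python
import numpy as np

def r50_from_depth_dose(depth_cm, dose):
    """Depth (cm) at which the dose falls to 50% of its maximum,
    found by linear interpolation on the distal falloff."""
    dose = np.asarray(dose, dtype=float)
    i_max = dose.argmax()
    distal_depth = depth_cm[i_max:]
    distal_dose = dose[i_max:]
    # reverse so the dose axis is increasing, as np.interp requires
    return float(np.interp(0.5 * dose.max(),
                           distal_dose[::-1], distal_depth[::-1]))

def mean_energy_mev(r50_cm):
    """Rule of thumb for the mean electron energy at the phantom
    surface: E0 ~ 2.33 * R50 (E0 in MeV, R50 in cm of water)."""
    return 2.33 * r50_cm

# synthetic depth-dose curve: flat plateau, then linear falloff to zero
depth = np.linspace(0, 6, 61)
dose = np.clip(1.0 - np.maximum(depth - 2.0, 0.0) / 4.0, 0.0, None)
r50 = r50_from_depth_dose(depth, dose)
print(round(r50, 2), round(mean_energy_mev(r50), 2))  # → 4.0 9.32
```

    In practice a measured curve is noisy and the fit uses several depth-dose features at once, but the half-value depth alone already pins the mean energy to within a few percent.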

  16. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    SciTech Connect

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be performed automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
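
    The CADIS idea referenced here, biasing the source by an adjoint ("importance") flux and setting weight windows so that birth weights land exactly at their window centers, can be sketched on a discretized toy problem (the arrays below are hypothetical stand-ins; in ADVANTG the adjoint flux comes from a deterministic transport solve):

```python
import numpy as np

def cadis_parameters(source, adjoint_flux):
    """CADIS on a discretized toy problem: bias the source by the
    adjoint (importance) flux and derive consistent weight-window
    centers. R = <q, phi+> estimates the detector response."""
    q = np.asarray(source, float)
    q = q / q.sum()                      # treat the source as a PDF
    phi = np.asarray(adjoint_flux, float)
    response = float(np.sum(q * phi))    # R = <q, phi+>
    biased_source = q * phi / response   # CADIS biased source (also a PDF)
    ww_centers = response / phi          # weight-window centers R / phi+
    return biased_source, ww_centers, response

q = np.array([1.0, 2.0, 0.5])        # hypothetical source strengths
phi_adj = np.array([0.1, 0.4, 2.0])  # hypothetical adjoint flux
qb, ww, R = cadis_parameters(q, phi_adj)
# birth weight q/qb equals the weight-window center in every cell:
print(np.allclose((q / q.sum()) / qb, ww))  # → True
```

    That consistency between source biasing and weight windows is the defining property of CADIS: particles are born inside their windows, so no splitting or roulette is wasted at birth.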

  17. LDRD project 151362 : low energy electron-photon transport.

    SciTech Connect

    Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James

    2013-09-01

    At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.

  18. Few-photon transport in many-body photonic systems: A scattering approach

    NASA Astrophysics Data System (ADS)

    Lee, Changhyoup; Noh, Changsuk; Schetakis, Nikolaos; Angelakis, Dimitris G.

    2015-12-01

    We study the quantum transport of multiphoton Fock states in one-dimensional Bose-Hubbard lattices implemented in QED cavity arrays (QCAs). We propose an optical scheme to probe the underlying many-body states of the system by analyzing the properties of the transmitted light using scattering theory. To this end, we employ the Lippmann-Schwinger formalism within which an analytical form of the scattering matrix can be found. The latter is evaluated explicitly for the two-particle, two-site case which we use to study the resonance properties of two-photon scattering, as well as the scattering probabilities and the second-order intensity correlations of the transmitted light. The results indicate that the underlying structure of the many-body states of the model in question can be directly inferred from the physical properties of the transported photons in its QCA realization. We find that a fully resonant two-photon scattering scenario allows a faithful characterization of the underlying many-body states, unlike in the coherent driving scenario usually employed in quantum master-equation treatments. The effects of losses in the cavities, as well as the incoming photons' pulse shapes and initial correlations, are studied and analyzed. Our method is general and can be applied to probe the structure of any many-body bosonic model amenable to a QCA implementation, including the Jaynes-Cummings-Hubbard model, the extended Bose-Hubbard model, as well as a whole range of spin models.
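
    For reference, the Lippmann-Schwinger formalism invoked here is the standard scattering-theory construction (textbook form, not the paper's specific two-photon matrix elements):

```latex
% Scattering state built from a free state |\phi> of energy E:
\lvert \psi^{+} \rangle \;=\; \lvert \phi \rangle
   \;+\; \frac{1}{E - H_0 + i\epsilon}\, V \,\lvert \psi^{+} \rangle ,
% with the T-matrix satisfying
T \;=\; V \;+\; V\,\frac{1}{E - H_0 + i\epsilon}\, T ,
% and S-matrix elements
S_{fi} \;=\; \delta_{fi} \;-\; 2\pi i\,\delta(E_f - E_i)\, T_{fi} .
```

    In the paper's setting H_0 describes free photons in the waveguide, V the coupling to the Bose-Hubbard lattice, and the two-photon S-matrix elements are what carry the imprint of the many-body spectrum.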

  19. Monte Carlo linear accelerator simulation of megavoltage photon beams: Independent determination of initial beam parameters

    SciTech Connect

    Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore

    2012-01-15

    Purpose: To individually benchmark the incident electron parameters in a Monte Carlo model of an Elekta linear accelerator operating at 6 and 15 MV. The main objective is to establish a simplified but still precise benchmarking procedure that allows accurate dose calculations of advanced treatment techniques. Methods: The EGSnrc Monte Carlo user codes BEAMnrc and DOSXYZnrc are used for photon beam simulations and dose calculations, respectively. A 5 x 5 cm^2 field is used to determine both the incident electron energy and the electron radial intensity. First, the electron energy is adjusted to match the calculated depth dose to the measured one. Second, the electron radial intensity is adjusted to make the calculated dose profile in the penumbra region match the penumbra measured by GafChromic EBT film. Finally, the mean angular spread of the incident electron beam is determined by matching calculated and measured cross-field profiles of large fields. The beam parameters are verified for various field sizes and shapes. Results: The penumbra measurements revealed a non-circular electron radial intensity distribution for the 6 MV beam, while a circular electron radial intensity distribution could best describe the 15 MV beam. These electron radial intensity distributions, given as the standard deviation of a Gaussian distribution, were found to be 0.25 mm (in-plane) and 1.0 mm (cross-plane) for the 6 MV beam and 0.5 mm (both in-plane and cross-plane) for the 15 MV beam. Introducing a small mean angular spread of the incident electron beam has a considerable impact on the lateral dose profiles of large fields. The mean angular spread was found to be 0.7 deg and 0.5 deg for the 6 and 15 MV beams, respectively. Conclusions: The incident electron beam parameters in a Monte Carlo model of a linear accelerator could be precisely and independently determined by the proposed benchmarking procedure. As the dose distribution in the penumbra region is insensitive to moderate changes in electron energy and angular spread, accurate penumbra measurement is feasible for benchmarking the electron radial intensity distribution. This parameter is particularly important for accurate dosimetry of MLC-shaped fields and small fields.
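
    The one-parameter-at-a-time matching underlying this procedure can be sketched as a simple root find: adjust a single source parameter until a simulated scalar metric reproduces its measured value (the secant update and the toy linear "simulator" below are illustrative stand-ins for full MC runs):

```python
def tune_parameter(simulate_metric, measured, p0, p1, tol=1e-3, max_iter=20):
    """Secant-style adjustment of a single beam parameter (e.g. incident
    electron energy) until simulate_metric(p) matches the measurement."""
    f0 = simulate_metric(p0) - measured
    f1 = simulate_metric(p1) - measured
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        p0, p1 = p1, p1 - f1 * (p1 - p0) / (f1 - f0)
        f0, f1 = f1, simulate_metric(p1) - measured
    return p1

# toy "simulator": the depth-dose half-value R50 grows linearly with energy
energy = tune_parameter(lambda e: 0.43 * e, measured=4.0, p0=8.0, p1=10.0)
print(round(energy, 2))  # → 9.3
```

    The point of the paper's procedure is that each parameter is tied to a metric it dominates (energy to depth dose, radial intensity to penumbra, angular spread to large-field profiles), so each root find stays one-dimensional.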

  20. PREMAR-3F Monte Carlo simulations of fluorescence and Cherenkov photon flux radiative transfer

    NASA Astrophysics Data System (ADS)

    Kostadinov, I.; Pagnutti, S.; Giovanelli, G.

    Different approaches are deployed to detect relativistic particles and to reveal their physical properties. Considerable interest has recently been devoted to detecting Extreme Energy Cosmic Rays (EECRs, with E > 3x10^19 eV) and high-energy cosmic neutrinos. The characteristic UV fluorescence emissions and Cherenkov radiation produced by particle-atmosphere interactions provide an opportunity to retrieve the physical parameters of EECRs. A number of ground-based facilities are already operating to measure these optical phenomena; however, they are limited in the atmospheric volume they can probe compared with similar instruments proposed for space platforms. The correct interpretation of such down-looking space-borne measurements requires not only an assessment of all factors that appear as short-lived or continuous atmospheric spectral noise, e.g. blue jets, airglow, lightning, city and night-sky light, but also an estimate of the fluorescence and Cherenkov photon flux arriving at the instrument input. For this purpose the PREMAR-3F code, which performs Monte Carlo simulations of radiative transfer, proves very useful. In the code, the observed area can be split into sub-areas, each with its own atmospheric model, landscape, sea-surface roughness, cloud cover, albedo, spectral noise sources, etc. This makes it possible to evaluate the spectral SNR under a large variety of scenarios that may appear within the instrumental FoV. A very useful feature of the PREMAR-3F code is the possibility, starting from a reference environment, to evaluate in a single run the differential effects of small perturbations of given physical parameters. We show and discuss simulations of the UV fluorescence and Cherenkov photon flux in the 300-400 nm band, performed with single- and multiple-scattering approaches, in an attempt to evaluate the detection limit of the Extreme Universe Space Observatory (EUSO) and to estimate the effects of known atmospheric optical phenomena under different scenarios.

  1. Monte Carlo modelling of positron transport in real world applications

    NASA Astrophysics Data System (ADS)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules, and swarm calculations for positrons in gases, has led to the establishment of good cross-section sets for positron interactions with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results of modelling Surko-type positron buffer-gas traps, the application of the rotating wall technique, and the simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
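
    A workhorse of such swarm simulations is null-collision (fictitious-collision) free-flight sampling, which handles a collision frequency that varies along the flight as the particle's energy changes; a minimal sketch, assuming a known upper bound nu_max (the toy frequency and energy profile below are hypothetical):

```python
import math
import random

def sample_collision_time(nu_of_energy, nu_max, energy_at_time,
                          rng=random.Random(42)):
    """Null-collision sampling of the time to the next real collision.
    Tentative flights are drawn at the constant bounding rate nu_max;
    each tentative event is accepted as real with probability
    nu(E)/nu_max, otherwise it is a 'null' collision and flight
    continues unperturbed."""
    t = 0.0
    while True:
        t += -math.log(rng.random()) / nu_max
        if rng.random() < nu_of_energy(energy_at_time(t)) / nu_max:
            return t

# sanity check: with nu constant and equal to nu_max every tentative
# event is real, so sampled times are exponential with mean 1/nu_max
times = [sample_collision_time(lambda E: 2.0, 2.0, lambda t: 1.0)
         for _ in range(20000)]
print(round(sum(times) / len(times), 1))  # → 0.5
```

    The technique avoids integrating the collision frequency along the trajectory, which is what makes it attractive when cross-section sets span thermal to keV energies.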

  2. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    SciTech Connect

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  3. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.

  4. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture (CUDA)-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup in processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy. PMID:23085901
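
    The kernel such GPU codes parallelize is the per-photon random walk. A minimal depth-only sketch in the spirit of MCML-type weighted photon transport (isotropic redirection and a plain weight cutoff instead of roulette; all numbers are illustrative, not the validated P2P code):

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, n_photons=5000, rng=random.Random(1)):
    """Depth-only weighted photon walk in a semi-infinite medium:
    sample a free path, deposit a fraction mu_a/mu_t of the weight at
    each interaction, redirect isotropically in depth, and tally the
    weight that escapes through the surface (z < 0)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while w > 1e-3:
            z += uz * (-math.log(rng.random()) / mu_t)  # free path
            if z < 0.0:
                reflected += w                          # escaped: tally
                break
            w *= albedo                                 # partial absorption
            uz = 2.0 * rng.random() - 1.0               # isotropic in depth
    return reflected / n_photons

low_abs = diffuse_reflectance(mu_a=0.1, mu_s=10.0)
high_abs = diffuse_reflectance(mu_a=1.0, mu_s=10.0)
print(0.0 < high_abs < low_abs < 1.0)  # → True
```

    Because each photon history is independent, this loop maps directly onto one GPU thread per photon, which is the source of the speedups the record reports.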

  5. Monte Carlo simulation and measurements of clinical photon beams using LiF:Mg,Cu,P+PTFE.

    PubMed

    Azorín-Vega, C; Rivera-Montalvo, T; Azorín-Nieto, J; Villaseñor-Navarro, L; Luján-Castilla, P; Vega-Carrillo, H

    2010-01-01

    The thermoluminescent response of LiF:Mg,Cu,P+PTFE under clinical photon irradiation was obtained. Thermoluminescent dosimeters (TLDs) were irradiated to determine the entrance surface dose (ESD) in a solid water phantom when using standard clinical adult treatment protocols. A Monte Carlo simulation of photon interaction with matter was performed and the absorbed dose determined. The ESD calculated by the MCNPX code was greater than that determined by direct measurements in the phantom. The results obtained open the possibility of using this material as a TLD in medical accelerators. PMID:20093037

  6. The FERMI-Elettra FEL Photon Transport System

    SciTech Connect

    Zangrando, M.; Cudin, I.; Fava, C.; Godnig, R.; Kiskinova, M.; Masciovecchio, C.; Parmigiani, F.; Rumiz, L.; Svetina, C.; Turchet, A.; Cocco, D.

    2010-06-23

    The FERMI-Elettra free electron laser (FEL) user facility is under construction at Sincrotrone Trieste (Italy) and will be operative in late 2010. It is based on a seeded scheme providing an almost perfectly transform-limited and fully spatially coherent photon beam. FERMI-Elettra will cover the wavelength range from 100 to 3 nm with the fundamental harmonics, and down to 1 nm with higher harmonics. We present the layout of the photon beam transport system, which includes a first common part providing online and shot-to-shot beam diagnostics, called PADReS (Photon Analysis Delivery and Reduction System), and 3 independent beamlines feeding the experimental stations. Particular emphasis is given to the solutions adopted to preserve the wavefront and to avoid damage to the different optical elements. FEL-specific devices, not common in synchrotron radiation facilities, are described in more detail, e.g. the online photon energy spectrometer measuring the spectrum of the emitted radiation shot by shot, the beam-splitting and delay-line system dedicated to cross/auto-correlation and pump-probe experiments, and the wavefront-preserving active optics adapting the shape and size of the focused spot to the needs of the different experiments.

  7. A GAMOS plug-in for GEANT4 based Monte Carlo simulation of radiation-induced light transport in biological media.

    PubMed

    Glaser, Adam K; Kanick, Stephen C; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W

    2013-05-01

    We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high-energy particle and the subsequent optical photon transport, a dynamic coupled process that is not well described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790

  8. A GAMOS plug-in for GEANT4 based Monte Carlo simulation of radiation-induced light transport in biological media

    PubMed Central

    Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.

    2013-01-01

    We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high-energy particle and the subsequent optical photon transport, a dynamic coupled process that is not well described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
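
    The emission condition mentioned above (a charged particle faster than the local phase velocity of light, beta > 1/n) fixes a kinetic-energy threshold; a small worked sketch for electrons:

```python
import math

M_E_MEV = 0.511  # electron rest energy in MeV

def cherenkov_threshold_mev(n):
    """Kinetic energy above which an electron emits Cherenkov light in
    a medium of refractive index n: the condition beta > 1/n gives
    gamma > 1/sqrt(1 - 1/n^2), hence KE = m c^2 (gamma - 1)."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return M_E_MEV * (gamma - 1.0)

print(round(cherenkov_threshold_mev(1.33), 3))  # → 0.264 (water)
```

    The ~264 keV threshold in water is why megavoltage radiotherapy beams and many radionuclide decays produce detectable Čerenkov light in tissue.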

  9. Monte Carlo simulation of photon migration in turbid random media based on the object-oriented programming paradigm

    NASA Astrophysics Data System (ADS)

    Doronin, Alex; Meglinski, Igor

    2011-03-01

    The advantages of using the Monte Carlo method for simulating radiative transfer in complex turbid random media, such as biological tissues, are well recognized. However, in most practical applications the wave nature of the probing optical radiation is ignored, and its propagation is considered in terms of neutral particles, so-called photon packets. Nevertheless, when interference, polarization or coherent scattering effects of optical/laser radiation constitute the fundamental principle of a particular optical technique, the wave nature of the radiation must be taken into account. In the current report we present the state of the art and the development prospects for extending the Monte Carlo method to account for the wave properties of optical radiation. We also introduce a novel Object-Oriented Programming (OOP) paradigm, accelerated by a Graphics Processing Unit, that provides an opportunity to increase the performance of a standard Monte Carlo simulation by up to 100 times.

  10. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report

    SciTech Connect

    Filippone, W.L.; Baker, R.S.

    1990-12-31

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_N region. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_N calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
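
    The interface-flux iteration described above has the shape of a fixed-point solve: the common boundary fluxes are repeatedly pushed through the response matrices until they stop changing. A dense toy sketch (the 2x2 matrix stands in for the MC-computed response matrices; convergence requires its spectral radius to be below 1):

```python
import numpy as np

def iterate_interface_flux(R, source, tol=1e-10, max_iter=500):
    """Fixed-point iteration psi <- R @ psi + source for the common
    angular boundary fluxes coupling the Monte Carlo and S_N regions
    (toy dense stand-in for the response-matrix technique)."""
    psi = np.zeros_like(source)
    for _ in range(max_iter):
        new = R @ psi + source
        if np.max(np.abs(new - psi)) < tol:
            return new
        psi = new
    return psi

R = np.array([[0.0, 0.3], [0.2, 0.1]])  # hypothetical response matrix
s = np.array([1.0, 0.5])                # hypothetical fixed sources
psi = iterate_interface_flux(R, s)
# the iteration converges to the solution of (I - R) psi = s:
print(np.allclose(psi, np.linalg.solve(np.eye(2) - R, s)))  # → True
```

    In the real scheme the entries of R carry Monte Carlo statistical noise, which is why the iteration is interleaved with fresh response-matrix estimates rather than solved once and for all.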

  11. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose-rate constraint. Results are presented for sample problems involving primary neutron transport, and primary and secondary photon transport, in a spherical reactor shield configuration.

  12. Investigation of a probe design for facilitating the uses of the standard photon diffusion equation at short source-detector separations: Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Tseng, Sheng-Hao; Hayakawa, Carole; Spanier, Jerome; Durkin, Anthony J.

    2009-09-01

    We design a special diffusing probe to investigate the optical properties of human skin in vivo. The special geometry of the probe enables a modified two-layer (MTL) diffusion model to precisely describe the photon transport even when the source-detector separation is shorter than 3 mean free paths. We provide a frequency-domain comparison between the Monte Carlo model and the diffusion model in both the MTL geometry and the conventional semi-infinite geometry. We show that, using the Monte Carlo model as a benchmark, the MTL diffusion theory performs better than the diffusion theory in the semi-infinite geometry. In addition, we carry out Monte Carlo simulations with the goal of investigating the dependence of the interrogation depth of this probe on several parameters, including source-detector separation, sample optical properties, and properties of the diffusing high-scattering layer. From the simulations, we find that the optical properties of samples modulate the interrogation volume greatly, and that the source-detector separation and the thickness of the diffusing layer are the two dominant probe parameters that impact the interrogation volume. Our simulation results provide design guidelines for a MTL geometry probe.
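
    For orientation, the standard-diffusion benchmark that the MTL model is compared against is often written in the dipole (real source plus image source) form for a semi-infinite medium; a sketch of one common steady-state expression (the boundary-mismatch parameter A is taken here as an illustrative constant):

```python
import math

def diffuse_reflectance_dipole(rho, mu_a, mu_s_prime, A=2.0):
    """Spatially resolved steady-state diffuse reflectance of a
    semi-infinite medium in the standard-diffusion (dipole)
    approximation: an isotropic source at depth z0 plus a negative
    image source above the extrapolated boundary. One common form,
    shown as a sketch."""
    mu_t = mu_a + mu_s_prime
    mu_eff = math.sqrt(3.0 * mu_a * mu_t)   # effective attenuation
    z0 = 1.0 / mu_t                         # depth of isotropic source
    zb = 2.0 * A / (3.0 * mu_t)             # extrapolated-boundary offset
    r1 = math.hypot(rho, z0)                # distance to real source
    r2 = math.hypot(rho, z0 + 2.0 * zb)     # distance to image source
    term = lambda z, r: z * (mu_eff + 1.0 / r) * math.exp(-mu_eff * r) / r**2
    return (term(z0, r1) + term(z0 + 2.0 * zb, r2)) / (4.0 * math.pi)

# reflectance falls off monotonically with source-detector separation
print(diffuse_reflectance_dipole(0.5, 0.1, 10.0)
      > diffuse_reflectance_dipole(1.0, 0.1, 10.0) > 0.0)  # → True
```

    Expressions of this type are accurate only several transport mean free paths from the source, which is precisely the short-separation regime the MTL probe geometry is designed to repair.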

  13. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  14. Monte Carlo study of photon beams from medical linear accelerators: Optimization, benchmark and spectra

    NASA Astrophysics Data System (ADS)

    Sheikh-Bagheri, Daryoush

    1999-12-01

    BEAM is a general-purpose EGS4 user code for simulating radiotherapy sources (Rogers et al., Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first eliminating unnecessary electron transport (a factor of 3 improvement in efficiency). The uniform bremsstrahlung splitting (UBS) technique is assessed and found to improve efficiency by a further factor of 4. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1 σ level and for all at the 2 σ level.
The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1 σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.
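
    The variance-reduction ideas above come down to simple statistical-weight bookkeeping: uniform bremsstrahlung splitting replaces one photon with N copies carrying 1/N of the weight, and Russian Roulette removes low-weight particles while boosting survivors so the expected weight is unchanged. A minimal sketch of this weight arithmetic (generic illustration, not the BEAM/EGS4 implementation):

```python
import random

def uniform_split(weight, n_split):
    """Uniform bremsstrahlung splitting: replace one photon of the given
    statistical weight with n_split copies of weight weight/n_split.
    The total weight is conserved exactly."""
    return [weight / n_split] * n_split

def russian_roulette(weight, survival_prob, rng):
    """Kill a particle with probability 1 - survival_prob; survivors carry
    the extra weight so the expected weight is unchanged."""
    if rng.random() < survival_prob:
        return weight / survival_prob
    return None  # particle terminated

rng = random.Random(42)
photons = uniform_split(1.0, 20)
assert abs(sum(photons) - 1.0) < 1e-12  # splitting conserves weight exactly

# Roulette conserves weight on average:
n, p = 100_000, 0.25
total = sum(w for w in (russian_roulette(1.0, p, rng) for _ in range(n)) if w)
assert abs(total / n - 1.0) < 0.02
```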

  15. Hypersensitive Transport in Photonic Crystals with Accidental Spatial Degeneracies

    PubMed Central

    Makri, Eleana; Smith, Kyle; Chabanov, Andrey; Vitebskiy, Ilya; Kottos, Tsampikos

    2016-01-01

    A localized mode in a photonic layered structure can develop nodal points (nodal planes), where the oscillating electric field is negligible. Placing a thin metallic layer at such a nodal point results in the phenomenon of induced transmission. Here we demonstrate that if the nodal point is not a point of symmetry, then even a tiny alteration of the permittivity in the vicinity of the metallic layer drastically suppresses the localized mode along with the resonant transmission. This renders the layered structure highly reflective within a broad frequency range. Applications of this hypersensitive transport for optical and microwave limiting and switching are discussed. PMID:26903232

  18. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material, mimicking the head-and-neck and thorax regions, and in an Alderson anthropomorphic phantom. The dosimetric accuracy of the MC simulation of the micro-multileaf collimator (MLC) was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C-shaped and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head-and-neck and 15 lung patients, a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical of the head-and-neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in the MC calculations. We are planning to implement MC calculations for head-and-neck and lung cancer patients.

  19. Direct calibration in megavoltage photon beams using Monte Carlo conversion factor: validation and clinical implications

    NASA Astrophysics Data System (ADS)

    Wright, Tracy; Lye, Jessica E.; Ramanathan, Ganesan; Harty, Peter D.; Oliver, Chris; Webb, David V.; Butler, Duncan J.

    2015-01-01

The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and the doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% at 18 MV. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Compared to the current calibration method, users can expect a shift of 0.4-1.1% in the calibration factor of an NE2571 chamber across the range of calibration energies.

  20. Radiative transport in fluorescence-enhanced frequency domain photon migration.

    PubMed

    Rasmussen, John C; Joshi, Amit; Pan, Tianshu; Wareing, Todd; McGhee, John; Sevick-Muraca, Eva M

    2006-12-01

Small animal optical tomography has significant potential application for streamlining drug discovery and pre-clinical investigation of drug candidates. However, accurate modeling of photon propagation in small animal volumes is critical to obtaining quantitatively accurate tomographic images. Herein we present solutions from a robust fluorescence-enhanced, frequency domain radiative transport equation (RTE) solver with unique attributes that facilitate its deployment within tomographic algorithms. Specifically, the coupled equations describing time-dependent excitation and emission light transport are solved using discrete ordinates (SN) angular differencing along with linear discontinuous finite-element spatial differencing on unstructured tetrahedral grids. Source iteration in conjunction with diffusion synthetic acceleration is used to iteratively solve the resulting system of equations. This RTE solver can accurately and efficiently predict ballistic as well as diffusion-limited transport regimes, which can coexist in small animals. Furthermore, the solver provides accurate solutions on unstructured tetrahedral grids with relatively large element sizes compared to commonly employed solvers that use step differencing. The predictions of the solver are validated by a series of frequency-domain phantom measurements with optical properties ranging from diffusion-limited to transport-limited propagation. Our results demonstrate that the RTE solution consistently matches measurements made under both diffusion- and transport-limited conditions. This work demonstrates the suitability of an appropriate RTE solver for deployment in small animal optical tomography. PMID:17278821
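
    The source-iteration machinery described above can be illustrated in one dimension. The sketch below is a toy slab-geometry S2 solver with diamond differencing and unaccelerated source iteration, not the authors' tetrahedral finite-element RTE code; the cross sections and source are illustrative:

```python
import math

def solve_slab(sigma_t, sigma_s, q0, width, nx, tol=1e-8):
    """Toy 1-D slab, isotropic scattering, S2 discrete ordinates with
    diamond differencing and unaccelerated source iteration."""
    dx = width / nx
    mu = 1.0 / math.sqrt(3.0)   # S2 Gauss quadrature: mu = +/-1/sqrt(3), weights 1
    phi = [0.0] * nx            # scalar flux guess
    for _ in range(500):        # source iterations
        src = [0.5 * (sigma_s * p + q0) for p in phi]  # isotropic emission density
        phi_new = [0.0] * nx
        for sign in (1, -1):    # sweep both directions, vacuum boundaries
            psi_in = 0.0
            cells = range(nx) if sign > 0 else range(nx - 1, -1, -1)
            for i in cells:
                a = mu / dx
                # diamond difference: mu*(out-in)/dx + sigma_t*(in+out)/2 = src
                psi_out = ((a - 0.5 * sigma_t) * psi_in + src[i]) / (a + 0.5 * sigma_t)
                phi_new[i] += 0.5 * (psi_in + psi_out)  # cell-average psi, weight 1
                psi_in = psi_out
        if max(abs(u - w) for u, w in zip(phi, phi_new)) < tol:
            return phi_new
        phi = phi_new
    return phi

phi = solve_slab(sigma_t=1.0, sigma_s=0.5, q0=1.0, width=10.0, nx=100)
# Deep inside the slab the flux approaches the infinite-medium value
# q0 / (sigma_t - sigma_s) = 2.
```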

  1. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for computer-aided design for users of radiation transport codes in the nuclear field, in particular in core design and radiation analysis. (authors)

  2. Update On the Status of the FLUKA Monte Carlo Transport Code*

    NASA Technical Reports Server (NTRS)

    Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.

    2006-01-01

The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and the upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available, along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool.
On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, dose calculations for radiation therapy as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.

  3. Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study

    SciTech Connect

    Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon

    2012-03-15

Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 x 5 cm^2 (FS5), 10 x 10 cm^2 (FS10), and 20 x 20 cm^2 (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum continuously increased. For modulated 6 MV, the PDD increase at depths of 10 and 20 cm was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2% and 4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. For modulated 10 MV, the corresponding increases were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm of mercury and of modulated 10 MV with 25 mm of mercury were reduced to 30% and 56% of those of the conventional linac, respectively.
The energy-modulated IMRT plans delivered lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the periphery while maintaining similar target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing IMRT plan quality.
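
    The PDD increase reported above is a beam-hardening effect: mercury preferentially attenuates low-energy photons, raising the mean energy of the transmitted spectrum. A toy illustration of the mechanism (the 1/E attenuation model and the spectrum are assumptions for illustration, not measured mercury data):

```python
import math

def harden(energies_mev, weights, thickness_cm, mu0=0.5):
    """Attenuate each spectral bin through a filter whose attenuation
    coefficient falls with energy (assumed mu = mu0/E, purely illustrative)."""
    return [w * math.exp(-(mu0 / e) * thickness_cm)
            for e, w in zip(energies_mev, weights)]

def mean_energy(energies, weights):
    return sum(e * w for e, w in zip(energies, weights)) / sum(weights)

spectrum_e = [0.5, 1.0, 2.0, 4.0, 6.0]        # MeV bins (toy spectrum)
spectrum_w = [0.30, 0.30, 0.20, 0.15, 0.05]   # relative fluence

filtered = harden(spectrum_e, spectrum_w, thickness_cm=2.0)
# Low-energy bins are attenuated most, so the mean energy rises
# (the spectrum "hardens"), which is what raises the PDD at depth.
assert mean_energy(spectrum_e, filtered) > mean_energy(spectrum_e, spectrum_w)
```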

  4. Time series analysis of Monte Carlo neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Nease, Brian Robert

    A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
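
    The central idea above, that the lag-1 autocorrelation of the projected fission-source series equals the ratio of the desired mode eigenvalue to the fundamental-mode eigenvalue, can be sketched on a synthetic AR(1) series (the ratio, noise model, and k0 here are invented for illustration):

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient of a series."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Synthetic AR(1) process x[t+1] = r*x[t] + noise, standing in for a
# projected fission-source mode; r plays the role of k1/k0.
rng = random.Random(1)
r_true, k0 = 0.6, 1.0
x = [0.0]
for _ in range(20000):
    x.append(r_true * x[-1] + rng.gauss(0.0, 1.0))

r_est = lag1_autocorr(x)
k1_est = r_est * k0   # higher-mode eigenvalue from the fundamental k0
assert abs(r_est - r_true) < 0.05
```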

  5. Monte Carlo Simulations on the Thermoelectric Transport Properties of Width-Modulated Nanowires

    NASA Astrophysics Data System (ADS)

    Zianni, X.

    2016-03-01

We performed Monte Carlo simulations on the electron and phonon transport properties of Si nanowires with constant widths and of nanowires modulated by a constriction. We discuss and compare the transport properties and the thermoelectric efficiency in the nanowires. An overall figure of merit (ZT) enhancement is predicted compared to the corresponding non-modulated nanowires. The ZT enhancement in thick, modulated nanowires has been found comparable to that in thin, non-modulated nanowires.
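
    For reference, the figure of merit quoted above is ZT = S^2 * sigma * T / kappa, where kappa is the sum of the electronic and phonon thermal conductivities. A minimal helper with illustrative numbers (not Si-nanowire values from the paper):

```python
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_e, kappa_ph, temp_k):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / (kappa_e + kappa_ph).
    Units: V/K, S/m, W/(m K), W/(m K), K."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / (kappa_e + kappa_ph)

# Illustrative values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 2 W/(m K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, kappa_e=0.5, kappa_ph=1.5, temp_k=300.0)
```

Reducing the phonon conductivity kappa_ph (as the constrictions above do, by scattering phonons) raises ZT directly.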

  6. Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin

    2010-03-01

Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model of bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain and the synovial fluid layer in the joints. A hybrid Radiosity-diffusion model is proposed in this paper for dealing with non-scattering regions within diffusing domains. This hybrid method incorporates a priori information on the geometry of the non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as a 'gold standard' for photon transport simulation.

  7. MC21 v.6.0 - A Continuous-Energy Monte Carlo Particle Transport Code with Integrated Reactor Feedback Capabilities

    NASA Astrophysics Data System (ADS)

    Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

    2014-06-01

MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 are provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. 
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
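
    The constructive solid geometry idea described above, cells built from signed tests against first- and second-order surfaces, can be sketched in a few lines. This is a hypothetical toy API for illustration, not MC21's actual geometry system:

```python
# Surfaces are signed functions f(x, y, z): negative on the "inside" of
# the surface, positive outside. A cell is the intersection of halfspaces.
def sphere(cx, cy, cz, r):
    """Second-order surface: signed test against a sphere."""
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 - r * r

def plane_z(z0):
    """First-order surface: negative below z = z0."""
    return lambda x, y, z: z - z0

def cell(*halfspaces):
    """A point is inside the cell if every signed surface test is negative."""
    return lambda p: all(s(*p) < 0.0 for s in halfspaces)

# Lower half of the unit sphere, built as an intersection of two halfspaces.
hemisphere = cell(sphere(0, 0, 0, 1.0), plane_z(0.0))
assert hemisphere((0.0, 0.0, -0.5))        # inside
assert not hemisphere((0.0, 0.0, 0.5))     # above the cutting plane
assert not hemisphere((2.0, 0.0, -0.5))    # outside the sphere
```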

  8. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  9. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications

    NASA Astrophysics Data System (ADS)

    Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan

    2015-09-01

The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost is high when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with a local grid refinement technique to reduce the computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the rest. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by 7.6 times (reducing the time from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue containing port wine stain lesions.
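
    The paper's photon-number criterion, at least 5 photons per voxel, is simple enough to encode directly. The helper below is a hypothetical convenience function illustrating the rule, not code from the paper:

```python
def min_photon_number(grid_shape, factor=5):
    """Minimum photon number for a voxel grid, per the criterion above:
    total photons >= factor * total voxel count. For a refined grid, pass
    the coarse-grid shape, since the budget is set by the coarse voxels."""
    total_voxels = 1
    for n in grid_shape:
        total_voxels *= n
    return factor * total_voxels

assert min_photon_number((100, 100, 100)) == 5_000_000
```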

  11. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Bouchard, Hugo; Bielajew, Alex

    2015-07-01

To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are restated in terms of this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. A Lewis-type theory for electron multiple scattering and its moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is derived. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.
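
    The Lorentz-force term added to the transport equation acts on charged particles as F = q(E + v x B). One standard way particle codes integrate this force between collisions is the Boris push, sketched below as a generic illustration; it is not part of the theoretical framework derived in the abstract:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(v, e_field, b_field, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    The rotation is exactly norm-preserving, so a pure magnetic field
    conserves particle speed."""
    half = [vi + 0.5 * dt * q_over_m * ei for vi, ei in zip(v, e_field)]
    t = [0.5 * dt * q_over_m * bi for bi in b_field]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [hi + ci for hi, ci in zip(half, cross(half, t))]
    v_plus = [hi + ci for hi, ci in zip(half, cross(v_prime, s))]
    return [vi + 0.5 * dt * q_over_m * ei for vi, ei in zip(v_plus, e_field)]

# Gyration in a uniform B field: speed is conserved to machine precision.
v = [1.0, 0.0, 0.0]
speed0 = sum(vi * vi for vi in v) ** 0.5
for _ in range(1000):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), q_over_m=1.0, dt=0.1)
assert abs(sum(vi * vi for vi in v) ** 0.5 - speed0) < 1e-9
```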

  12. A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

    SciTech Connect

    Keady, K P; Brantley, P

    2010-03-04

    Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep-penetration problems such as those examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
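
    The weight-window generation step described above can be caricatured in a few lines: window centers are set inversely proportional to the deterministic forward-flux estimate (so the particle population stays roughly uniform as the flux drops), and particles are split or rouletted into the window. The flux profile, window width, and function names below are invented for illustration:

```python
import random

def window_centers(phi):
    """Weight-window centers inversely proportional to a deterministic
    forward-flux estimate (Cooper-Larsen-style heuristic)."""
    return [phi[0] / p for p in phi]

def apply_window(weight, center, rng, width=2.0):
    """Split overweight particles; roulette underweight ones, conserving
    expected weight. Returns the list of surviving particle weights."""
    lo, hi = center / width, center * width
    if weight > hi:                       # split into fair-weight copies
        n = max(int(weight / center + 0.5), 2)
        return [weight / n] * n
    if weight < lo:                       # roulette: survive with p = w/c
        return [center] if rng.random() < weight / center else []
    return [weight]

phi = [1.0, 1e-1, 1e-2, 1e-3]             # toy deterministic flux estimate
centers = window_centers(phi)
assert centers == [1.0, 10.0, 100.0, 1000.0]

# Roulette conserves expected weight: average surviving weight ~ input weight.
rng = random.Random(7)
n, w_in, c = 200_000, 0.05, 1.0
total = sum(sum(apply_window(w_in, c, rng)) for _ in range(n))
assert abs(total / n - w_in) < 0.005
```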

  13. The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca

    2008-09-01

    In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.

  14. Light transport and lasing in complex photonic structures

    NASA Astrophysics Data System (ADS)

    Liew, Seng Fatt

Complex photonic structures refer to composite optical materials with dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites differs greatly from that in homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena and has made this a fast-growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by the dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. To better understand the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples. 
Our theoretical modeling and analysis explain why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely-related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultra-violet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolution path for their nanostructure. While the absorption of light is ubiquitous in nature and in applications, the question remains how absorption modifies the transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique has potential applications as an on-chip tunable light source.

  15. Electron transport in radiotherapy using local-to-global Monte Carlo

    SciTech Connect

    Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.; Neuenschwander, H.; Mackie, T.R.; Reckwerdt, P.J.

    1994-09-01

Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
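The global stepping stage can be sketched as repeatedly sampling pre-computed macro-step outcomes from a database keyed by material and energy bin; all names, bin widths, and numbers below are invented for illustration, not the paper's actual data layout:

```python
import random

# Hypothetical pre-computed macro-step database: for each (material, energy
# bin) a list of (path_length_cm, energy_loss_MeV) outcomes compiled from
# long runs of a detailed local Monte Carlo calculation.
STEP_DB = {
    ("water", 10): [(0.5, 0.8), (0.45, 0.9), (0.55, 0.7)],
    ("water", 9):  [(0.4, 0.9), (0.5, 1.0)],
}

def transport(energy, material="water", cutoff=1.0):
    """Global stepping: repeatedly sample a stored macro step until the
    particle drops below the tracking cutoff or leaves the database."""
    depth = 0.0
    while energy > cutoff:
        steps = STEP_DB.get((material, int(energy)))
        if steps is None:
            break  # outside the pre-computed database
        length, d_energy = random.choice(steps)
        depth += length
        energy -= d_energy
    return depth, energy
```

Each `random.choice` here stands in for sampling the stored PDFs: one lookup replaces the many small steps an ordinary Monte Carlo code would take across the same distance.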

  16. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
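The tracking-processor/tally-server split can be mimicked in-process with a message queue; a toy sketch with a thread standing in for a tally-server node (names and scores are hypothetical):

```python
from collections import defaultdict
from queue import Queue
import threading

def tally_server(inbox, tallies):
    """Receive (cell, score) messages and accumulate them until a None
    sentinel arrives; the server, not the tracking processor, owns the
    tally memory."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        cell, score = msg
        tallies[cell] += score

inbox = Queue()
tallies = defaultdict(float)
server = threading.Thread(target=tally_server, args=(inbox, tallies))
server.start()

# Tracking processors simulate particle movement and only post score
# messages; they never hold the (potentially huge) tally arrays.
for cell, score in [(0, 1.0), (1, 0.5), (0, 0.25)]:  # illustrative scores
    inbox.put((cell, score))
inbox.put(None)  # shut the server down
server.join()
```

In the actual algorithm the queue would be inter-node message passing (e.g. MPI) rather than a thread-safe queue, but the decomposition of roles is the same.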

  17. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    SciTech Connect

Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-03-20

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. 
Finally, we develop a technique for estimating radiation momentum deposition during the DDMC simulation, a quantity that is required to calculate correct fluid motion in coupled radiation-hydrodynamics problems. With a set of numerical examples, we demonstrate that our improved DDMC method is accurate and can provide efficiency gains of several orders of magnitude over standard Monte Carlo.
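The core idea of replacing many small steps with discrete cell-to-cell hops can be caricatured in a few lines; a deliberately simplified sketch in which uniform cells, symmetric hop probabilities, and the thick/thin threshold are all illustrative stand-ins for the paper's discretized diffusion equation:

```python
import random

def ddmc_walk(cell, sigma_t, thick=5.0, n_cells=10):
    """While the particle sits in an optically thick cell (large sigma_t),
    take one discrete diffusion step to a random neighbour instead of many
    small transport steps; hand back to standard Monte Carlo (here: just
    stop) on reaching an optically thin cell or leaving the mesh."""
    hops = 0
    while 0 <= cell < n_cells and sigma_t[cell] >= thick:
        cell += random.choice((-1, 1))  # symmetric interior hop
        hops += 1
    return cell, hops
```

The efficiency gain comes from `hops` counting cell-sized moves rather than the hundreds of mean-free-path-sized steps analog Monte Carlo would need in the same diffusive region.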

  18. Radiation dose measurements and Monte Carlo calculations for neutron and photon reactions in a human head phantom for accelerator-based boron neutron capture therapy

    NASA Astrophysics Data System (ADS)

    Kim, Don-Soo

    Dose measurements and radiation transport calculations were investigated for the interactions within the human brain of fast neutrons, slow neutrons, thermal neutrons, and photons associated with accelerator-based boron neutron capture therapy (ABNCT). To estimate the overall dose to the human brain, it is necessary to distinguish the doses from the different radiation sources. Using organic scintillators, human head phantom and detector assemblies were designed, constructed, and tested to determine the most appropriate dose estimation system to discriminate dose due to the different radiation sources that will ultimately be incorporated into a human head phantom to be used for dose measurements in ABNCT. Monoenergetic and continuous energy neutrons were generated via the 7Li(p,n)7Be reaction in a metallic lithium target near the reaction threshold using the 5.5 MV Van de Graaff accelerator at the University of Massachusetts Lowell. A human head phantom was built to measure and to distinguish the doses which result from proton recoils induced by fast neutrons, alpha particles and recoil lithium nuclei from the 10B(n,alpha)7Li reaction, and photons generated in the 7Li accelerator target as well as those generated inside the head phantom through various nuclear reactions at the same time during neutron irradiation procedures. The phantom consists of two main parts to estimate dose to tumor and dose to healthy tissue as well: a 3.22 cm3 boron loaded plastic scintillator which simulates a boron containing tumor inside the brain and a 2664 cm3 cylindrical liquid scintillator which represents the surrounding healthy tissue in the head. The Monte Carlo code MCNPX(TM) was used for the simulation of radiation transport due to neutrons and photons and extended to investigate the effects of neutrons and other radiation on the brain at various depths.

  19. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
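A correlated-histogram sampler of this kind can be sketched as drawing the photon energy from a marginal histogram and then the exit angle from a conditional histogram for that energy bin, which preserves the energy-angle correlation of the phase space; all bin values and probabilities below are invented for illustration:

```python
import random

# Hypothetical correlated histograms compiled from a full Monte Carlo run
# of the accelerator head: a marginal energy histogram plus, per energy
# bin, a conditional histogram of the polar exit angle (radians).
ENERGY_BINS = [(1.0, 0.5), (3.0, 0.3), (6.0, 0.2)]   # (MeV, probability)
ANGLE_BINS = {
    1.0: [(0.02, 0.7), (0.10, 0.3)],
    3.0: [(0.01, 0.8), (0.05, 0.2)],
    6.0: [(0.005, 0.9), (0.02, 0.1)],
}

def sample_bins(bins):
    """Draw one bin value from a discrete (value, probability) histogram."""
    r, cum = random.random(), 0.0
    for value, p in bins:
        cum += p
        if r < cum:
            return value
    return bins[-1][0]

def sample_photon():
    """Energy first, then angle conditioned on the energy bin."""
    energy = sample_bins(ENERGY_BINS)
    angle = sample_bins(ANGLE_BINS[energy])
    return energy, angle
```

Because each lookup is O(number of bins), the cost per sampled photon is constant, consistent with the claimed independence from the number of beams.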

  20. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media. PMID:25607163

  1. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions continue to advance the technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation, achieving a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663

  2. Minimizing the cost of splitting in Monte Carlo radiation transport simulation

    NASA Astrophysics Data System (ADS)

    Juzaitis, R. J.

    1980-10-01

A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost, formulated as the product of sample variance and time per particle history (σ_s²τ_p), associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
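The cost quantity being minimised is the product of sample variance and time per history, i.e. the reciprocal of the usual Monte Carlo figure of merit; a tiny numerical illustration (all variances and times invented):

```python
def figure_of_merit(sample_variance, time_per_history):
    """Reciprocal of the paper's computer cost sigma_s^2 * tau_p:
    minimising the cost is equivalent to maximising this figure of merit."""
    return 1.0 / (sample_variance * time_per_history)

# Invented numbers: heavier splitting lowers the variance but raises the
# time per history; the optimum splitting ratio balances the two.
no_split    = figure_of_merit(9.0e-4, 1.0)
moderate    = figure_of_merit(2.0e-4, 2.0)
heavy_split = figure_of_merit(1.0e-4, 8.0)
best = max(no_split, moderate, heavy_split)
```

In this made-up example the moderate splitting ratio wins: the deterministic analysis in the abstract predicts exactly this trade-off curve without requiring trial Monte Carlo runs at each parameter setting.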

  3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    SciTech Connect

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  4. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGESBeta

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  5. Monte Carlo-drift-diffusion simulation of electron current transport in III-N LEDs

    NASA Astrophysics Data System (ADS)

    Kivisaari, Pyry; Sadi, Toufik; Oksanen, Jani; Tulkki, Jukka

    2014-03-01

Performance of III-N based solid-state lighting is to a large extent limited by current transport effects that are also expected to contribute to the efficiency droop in real devices. To enable a more accurate study of the contribution of electron transport to droop, we develop and study a coupled Monte Carlo-drift-diffusion (MCDD) method to model the details of electron current transport in III-N optoelectronic devices. In the MCDD method, electron and hole distributions are first simulated by solving the standard drift-diffusion (DD) equations. The hole density and recombination rate density obtained from solving the DD equations are used as inputs in the Monte Carlo (MC) simulation of the electron system. The MC simulation involves solving the Boltzmann transport equation for the electron gas to accurately describe electron transport. As a hybrid of the DD and MC methods, the MCDD represents a first-order correction for electron transport in III-N LEDs as compared to DD, predicting a significant hot electron population in the simulated multi-quantum well (MQW) LED device at strong injection.

  6. MONTE CARLO PARTICLE TRANSPORT IN MEDIA WITH EXPONENTIALLY VARYING TIME-DEPENDENT CROSS-SECTIONS

    SciTech Connect

    F. BROWN; W. MARTIN

    2001-02-01

    A probability density function (PDF) and random sampling procedure for the distance to collision were derived for the case of exponentially varying cross-sections. Numerical testing indicates that both are correct. This new sampling procedure has direct application in a new method for Monte Carlo radiation transport, and may be generally useful for analyzing physical problems where the material cross-sections change very rapidly in an exponential manner.
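For a cross-section varying as sigma(s) = sigma0·exp(a·s), the optical depth is tau(s) = sigma0·(exp(a·s) − 1)/a, and equating it to −ln(xi) gives a closed-form inverse; a sketch of such a sampler (the function name and interface are illustrative, and note that a decaying cross-section can yield no collision at all):

```python
import math
import random

def distance_to_collision(sigma0, a, rng=random.random):
    """Sample a free path for sigma(s) = sigma0 * exp(a*s) by inverting
    the optical depth tau(s) = sigma0*(exp(a*s) - 1)/a against
    tau = -ln(xi); a == 0 reduces to the usual exponential law."""
    tau = -math.log(rng())
    if a == 0.0:
        return tau / sigma0
    arg = 1.0 + a * tau / sigma0
    if arg <= 0.0:       # a < 0: total optical depth along the ray is finite
        return math.inf  # the particle escapes without colliding
    return math.log(arg) / a
```

The `arg <= 0` branch is the physically meaningful case the constant-cross-section formula cannot produce: with a < 0 the integrated optical depth saturates at sigma0/|a|, so sampled depths beyond it mean no collision occurs.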

  7. Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program

    SciTech Connect

    Moskowitz, B.S.

    2000-02-01

    This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.

  8. Design and commissioning of the photon monitors and optical transport lines for the advanced photon source positron accumulator ring

    SciTech Connect

    Berg, W.; Yang, B.; Lumpkin, A.; Jones, J.

    1996-12-31

Two photon monitors have been designed and installed in the positron accumulator ring (PAR) of the Advanced Photon Source. The photon monitors characterize the beam's transverse profile, bunch length, emittance, and energy spread in a nonintrusive manner. An optical transport line delivers synchrotron light from the PAR out of a high radiation environment. Both charge-coupled device and fast-gated, intensified cameras are used to measure the transverse beam profile (0.11-1 mm for damped beam) with a resolution of 0.06 mm. A streak camera (θ_τ = 1 ps) is used to measure the bunch length, which is in the range of 0.3-1 ns. The design of the various transport components and commissioning results of the photon monitors will be discussed.

  9. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    NASA Astrophysics Data System (ADS)

    Dixon, D. A.; Prinja, A. K.; Franke, B. C.

    2015-09-01

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  10. Magnetic confinement of electron and photon radiotherapy dose: A Monte Carlo simulation with a nonuniform longitudinal magnetic field

    SciTech Connect

    Chen Yu; Bielajew, Alex F.; Litzenberg, Dale W.; Moran, Jean M.; Becchetti, Frederick D.

    2005-12-15

    It recently has been shown experimentally that the focusing provided by a longitudinal nonuniform high magnetic field can significantly improve electron beam dose profiles. This could permit precise targeting of tumors near critical areas and minimize the radiation dose to surrounding healthy tissue. The experimental results together with Monte Carlo simulations suggest that the magnetic confinement of electron radiotherapy beams may provide an alternative to proton or heavy ion radiation therapy in some cases. In the present work, the external magnetic field capability of the Monte Carlo code PENELOPE was utilized by providing a subroutine that modeled the actual field produced by the solenoid magnet used in the experimental studies. The magnetic field in our simulation covered the region from the vacuum exit window to the phantom including surrounding air. In a longitudinal nonuniform magnetic field, it is observed that the electron dose can be focused in both the transverse and longitudinal directions. The measured dose profiles of the electron beam are generally reproduced in the Monte Carlo simulations to within a few percent in the region of interest provided that the geometry and the energy of the incident electron beam are accurately known. Comparisons for the photon beam dose profiles with and without the magnetic field are also made. The experimental results are qualitatively reproduced in the simulation. Our simulation shows that the excessive dose at the beam entrance is due to the magnetic field trapping and focusing scattered secondary electrons that were produced in the air by the incident photon beam. The simulations also show that the electron dose profile can be manipulated by the appropriate control of the beam energy together with the strength and displacement of the longitudinal magnetic field.

  11. Multigroup Boltzmann-Fokker-Planck electron-photon transport capability in MCNP™

    SciTech Connect

    Adams, K.J.; Hart, M.

    1995-07-01

    The MCNP code system has a robust multigroup transport capability which includes a multigroup Boltzmann-Fokker-Planck (MGBFP) transport algorithm to perform coupled electron-photon or other coupled charged and neutral particle transport in either a forward or adjoint mode. This paper will discuss this capability and compare code results with other transport codes.

  12. Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.

    2003-01-01

    Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.

  13. Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Yu, Deliang; Yan, Longwen; Zhong, Guangwu; Lu, Jie; Yi, Ping

    2007-04-01

A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the splitting and Russian roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and medium-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.

  14. Exponentially-convergent Monte Carlo for the 1-D transport equation

    SciTech Connect

    Peterson, J. R.; Morel, J. E.; Ragusa, J. C.

    2013-07-01

We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
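The jump-based error indicator can be sketched as flagging, for refinement, the interfaces where the discontinuous trial-space traces disagree by more than a tolerance; the data layout and tolerance below are hypothetical, not the paper's:

```python
def flag_interfaces(left_trace, right_trace, tol=0.1):
    """Flag interior interfaces whose solution jump |psi_R - psi_L|
    exceeds tol; cells adjacent to flagged interfaces would then be
    h-refined.  left_trace[i] and right_trace[i] are the trial-space
    traces on either side of interface i (illustrative layout)."""
    return [i for i, (left, right) in enumerate(zip(left_trace, right_trace))
            if abs(right - left) > tol]
```

Because a converged discontinuous solution has small interface jumps, large jumps localise exactly the cells whose resolution is stalling the exponential convergence.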

  15. Monte Carlo simulation of photonic state tomography: a virtual Hanbury Brown and Twiss correlator

    NASA Astrophysics Data System (ADS)

    Murray, Eoin; Juska, Gediminas; Pelucchi, Emanuele

    2016-05-01

This paper provides a theoretical background for the simulations of particular quantum optics experiments, namely, photon intensity correlation measurements. A practical example, adapted to polarisation-entangled photon pairs emitted from a quantum dot, is presented. The tool, a virtual Hanbury Brown and Twiss correlator, simulates polarisation-resolved second-order correlation functions, which can then be used in a photonic state tomography procedure to obtain a full description of a light source's polarisation state. This educational tool is meant to improve general understanding of such quantum optics experiments.
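A toy version of such a virtual correlator, with the quantum-dot cascade replaced by an ideal (|HH⟩ + |VV⟩)/√2 source measured in the rectilinear basis; everything here is an illustrative stand-in, not the paper's simulation:

```python
import random

def coincidences(analyser1, analyser2, n=10000, rng=random.Random(42)):
    """Count the fraction of two-photon coincidences behind two polarisers
    for a source emitting (|HH> + |VV>)/sqrt(2).  In the rectilinear basis
    each pair collapses to HH or VV with equal probability, so co-polarised
    analysers click together roughly half the time while cross-polarised
    ones never do.  (Shared default RNG keeps the sketch deterministic.)"""
    count = 0
    for _ in range(n):
        pol = rng.choice("HV")  # joint outcome: both photons share pol
        if pol == analyser1 and pol == analyser2:
            count += 1
    return count / n
```

Repeating the count over a full set of analyser settings (including diagonal and circular bases) is what would feed a state-tomography reconstruction of the polarisation density matrix.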

  16. Monte Carlo study of the energy and angular dependence of the response of plastic scintillation detectors in photon beams

    PubMed Central

    Wang, Lilie L. W.; Klein, David; Beddar, A. Sam

    2010-01-01

Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached by a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached by an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs’ responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence within about 0.5% at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence within 1% in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all the angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of all three PSDs modeled in this study over the photon energy range from 200 keV (monoenergetic) to 18 MV (linac beam). 
The PSD would be even closer to water equivalence if a silica tube were placed around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level. PMID:21089762

  17. Combined modulated electron and photon beams planned by a Monte-Carlo-based optimization procedure for accelerated partial breast irradiation

    NASA Astrophysics Data System (ADS)

    Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio

    2012-03-01

The purpose of this study was to present a Monte-Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, and based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed similar acceptable planning target volume (PTV) coverage as conventional IMRT plans, with 90% of PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for CARMEN plans. The ipsilateral breast volumes receiving 50% Dp and 100% Dp were on average 1.4-1.7 times lower for CARMEN plans. Skin and whole-body low-dose volumes were also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.

  18. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards

    NASA Astrophysics Data System (ADS)

    Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

    2014-06-01

    An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and testing involving radiation dose related problems. In particular, the paper discusses electron transport simulations using the class-II condensed history method. The considered electron energies range from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then ported to the GPU using the CUDA C 5.0 standard. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case run with the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.
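
    The speedup factors implied by the timings quoted in this abstract can be made explicit:

```python
def speedup(t_reference_s, t_new_s):
    """Wall-clock speedup factor of one code run over another."""
    return t_reference_s / t_new_s

# Timings quoted in the abstract: 2 s on six GPUs vs 1645 s (EGSnrc)
# and 9213 s (MCNPX) for the same six-million-history problem.
s_egs = speedup(1645.0, 2.0)
s_mcnpx = speedup(9213.0, 2.0)
```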

  19. Cavity-photon-switched coherent transient transport in a double quantum waveguide

    SciTech Connect

    Abdullah, Nzar Rauf Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei

    2014-12-21

    We study a cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs, but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be selected via the selection of an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, that is, either parallel or perpendicular to the direction of electron propagation in the waveguide system is demonstrated.

  20. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    SciTech Connect

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-05-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
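
    The k-eigenvalue calculation mentioned above rests on generation-based power iteration. A toy sketch of the idea for an infinite homogeneous medium follows; the physics constants are invented for illustration and have nothing to do with MC++'s actual data.

```python
import random

def estimate_k(nu=2.5, p_fission=0.35, generations=30, inactive=10,
               histories=20000, seed=1):
    """Toy generation-based Monte Carlo k-eigenvalue estimate: each history
    induces fission with probability p_fission, releasing nu neutrons on
    average; k is the per-generation ratio of produced to started neutrons,
    averaged over the active cycles."""
    rng = random.Random(seed)
    k_cycles = []
    for _ in range(generations):
        births = 0
        for _ in range(histories):
            if rng.random() < p_fission:
                # integer sampling of a non-integer mean multiplicity nu
                births += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
        k_cycles.append(births / histories)
    active = k_cycles[inactive:]    # skip "inactive" startup cycles
    return sum(active) / len(active)

k_eff = estimate_k()   # analytic value is nu * p_fission = 0.875
```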

  1. A bone composition model for Monte Carlo x-ray transport simulations

    SciTech Connect

    Zhou Hu; Keall, Paul J.; Graves, Edward E.

    2009-03-15

    In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
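
    The central idea above, fitting calcium (and phosphorus) weight fractions as polynomial functions of bone density, can be sketched as follows. The density/composition pairs below are synthetic stand-ins, not the literature reference data used in the paper.

```python
import numpy as np

# Synthetic (density g/cm^3, calcium weight fraction) pairs; invented
# stand-ins, NOT the published adult-bone compositions.
rho = np.array([1.18, 1.33, 1.46, 1.74, 1.92])
w_ca = 0.03 + 0.11 * (rho - 1.0)

# Fit the calcium fraction as a polynomial of bone density, as the model does.
coeffs = np.polyfit(rho, w_ca, deg=1)
predict_ca = np.poly1d(coeffs)
```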

  2. Topological Photonic Quasicrystals: Fractal Topological Spectrum and Protected Transport

    NASA Astrophysics Data System (ADS)

    Bandres, Miguel A.; Rechtsman, Mikael C.; Segev, Mordechai

    2016-01-01

    We show that it is possible to have a topological phase in two-dimensional quasicrystals without any applied magnetic field, by instead introducing an artificial gauge field via dynamic modulation. This topological quasicrystal exhibits scatter-free unidirectional edge states that are extended along the system's perimeter, contrary to the states of an ordinary quasicrystal system, which are characterized by power-law decay. We find that the spectrum of this Floquet topological quasicrystal exhibits a rich fractal (self-similar) structure of topological "minigaps," manifesting an entirely new phenomenon: fractal topological systems. These topological minigaps form only when the system size is sufficiently large because their gapless edge states penetrate deep into the bulk. Hence, the topological structure emerges as a function of the system size, contrary to periodic systems where the topological phase can be completely characterized by the unit cell. We demonstrate the existence of this topological phase both by using a topological index (Bott index) and by studying the unidirectional transport of the gapless edge states and its robustness in the presence of defects. Our specific model is a Penrose lattice of helical optical waveguides—a photonic Floquet quasicrystal; however, we expect this new topological quasicrystal phase to be universal.

  3. High-speed evaluation of track-structure Monte Carlo electron transport simulations.

    PubMed

    Pasciak, A S; Ford, J R

    2008-10-01

    There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy. PMID:18780958

  4. Ion beam transport in tissue-like media using the Monte Carlo code SHIELD-HIT.

    PubMed

    Gudowska, Irena; Sobolevsky, Nikolai; Andreo, Pedro; Belkić, Dževad; Brahme, Anders

    2004-05-21

    The development of the Monte Carlo code SHIELD-HIT (heavy ion transport) for the simulation of the transport of protons and heavier ions in tissue-like media is described. The code SHIELD-HIT, a spin-off of SHIELD (available as RSICC CCC-667), extends the transport of hadron cascades from standard targets to that of ions in arbitrary tissue-like materials, taking into account ionization energy-loss straggling and multiple Coulomb scattering effects. The consistency of the results obtained with SHIELD-HIT has been verified against experimental data and other existing Monte Carlo codes (PTRAN, PETRA), as well as with deterministic models for ion transport, comparing depth distributions of energy deposition by protons, ¹²C and ²⁰Ne ions impinging on water. The SHIELD-HIT code yields distributions consistent with a proper treatment of nuclear inelastic collisions. Energy depositions up to and well beyond the Bragg peak due to nuclear fragmentations are well predicted. Satisfactory agreement is also found with experimental determinations of the number of fragments of a given type, as a function of depth in water, produced by ¹²C and ¹⁴N ions of 670 MeV u⁻¹, although less favourable agreement is observed for heavier projectiles such as ¹⁶O ions of the same energy. The calculated neutron spectra differential in energy and angle produced in a mimic of a Martian rock by irradiation with ¹²C ions of 290 MeV u⁻¹ also show good agreement with experimental data. It is concluded that a careful analysis of stopping power data for different tissues is necessary for radiation therapy applications, since an incorrect estimation of the position of the Bragg peak might lead to a significant deviation from the prescribed dose in small target volumes. The results presented in this study indicate the usefulness of the SHIELD-HIT code for Monte Carlo simulations in the field of light ion radiation therapy. PMID:15214534
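
    The sensitivity of the Bragg peak position to stopping-power data can be illustrated with the empirical Bragg-Kleeman range-energy rule. The coefficients below are common textbook values for protons in water, used here as a crude closed-form stand-in for the stopping-power integration a transport code such as SHIELD-HIT performs.

```python
def bragg_peak_depth_cm(energy_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman rule R = alpha * E**p for protons in water
    (alpha in cm, E in MeV). A few-percent error in the exponent,
    i.e. in the underlying stopping power, visibly shifts the
    predicted peak depth."""
    return alpha * energy_mev ** p

r150 = bragg_peak_depth_cm(150.0)   # roughly 15-16 cm in water
```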

  5. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    NASA Astrophysics Data System (ADS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2016-03-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
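
    The weak and strong scaling behavior reported for Shift is conventionally quantified as a parallel efficiency. A minimal sketch; the timing numbers in the usage line are invented.

```python
def strong_scaling_efficiency(t1, tn, n):
    """Fixed total problem size: ideal time on n processes is t1 / n."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Fixed work per process: ideal time on n processes stays t1."""
    return t1 / tn

# Invented timings: 100 s serial, 13 s on 8 ranks -> ~96% strong efficiency.
eff = strong_scaling_efficiency(100.0, 13.0, 8)
```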

  6. First-passage kinetic Monte Carlo on lattices: Hydrogen transport in lattices with traps

    NASA Astrophysics Data System (ADS)

    von Toussaint, U.; Schwarz-Selinger, T.; Schmid, K.

    2015-08-01

    A new algorithm for diffusion in 2D and 3D discrete simple cubic lattices has been developed, based on a recently proposed technique, Green's-function or first-passage kinetic Monte Carlo. It is based on the solutions of appropriately chosen Green's functions, which propagate the diffusing atoms over long distances in one step (superhops). The speed-up of the new approach over standard kinetic Monte Carlo techniques can be orders of magnitude, depending on the problem. Using this new algorithm we simulated recent hydrogen isotope exchange experiments in recrystallized tungsten at 320 K, initially loaded with deuterium. It was found that the observed depth profiles can only be explained with 'active' traps, i.e. traps capable of exchanging atoms with activation energies significantly lower than the actual trap energy. Such a mechanism has so far not been considered in the modeling of hydrogen transport.
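
    The speed-up behind the superhop idea is easy to see in one dimension: a brute-force walker needs on average L² hops to first exit the interval [-L, L], and a first-passage method replaces all of those elementary events by a single sampled boundary hit. A small sketch in lattice units, without traps:

```python
import random

def steps_to_exit(L, rng):
    """Hop count for a 1D lattice walker started at 0 to first reach +/-L
    (the brute-force event-by-event kinetic Monte Carlo baseline)."""
    x, steps = 0, 0
    while abs(x) < L:
        x += rng.choice((-1, 1))
        steps += 1
    return steps

rng = random.Random(2)
L = 10
mean_steps = sum(steps_to_exit(L, rng) for _ in range(2000)) / 2000.0
# The mean exit hop count is exactly L**2 = 100, so a first-passage
# superhop replaces ~100 elementary events by one update.
```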

  7. An object-oriented implementation of a parallel Monte Carlo code for radiation transport

    NASA Astrophysics Data System (ADS)

    Santos, Pedro Duarte; Lani, Andrea

    2016-05-01

    This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in testcases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited by the fluid simulator), the implementation was made efficient and reusable.
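
    Efficient ray tracing on unstructured meshes ultimately reduces to fast ray/cell-face intersection tests. Below is the textbook Möller-Trumbore ray/triangle test, shown as a generic building block; it is not COOLFluiD's actual implementation.

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection.
    Returns the ray parameter t of the hit, or None for a miss."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) / det              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) / det                  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) / det
    return t if t > eps else None
```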

  8. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model

    PubMed Central

    Nguyen, Jennifer; Hayakawa, Carole K.; Mourant, Judith R.; Venugopalan, Vasan; Spanier, Jerome

    2016-01-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642

  9. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome

    2016-05-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides. PMID:27231642
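
    The core of any perturbation Monte Carlo estimate is a likelihood-ratio weight applied to baseline histories, so one run yields results for both the baseline and the perturbed optical properties. A minimal unscattered-transmission sketch follows; it uses a single scalar attenuation coefficient and is far simpler than the polarized Mie model above.

```python
import math, random

def transmission_pmc(mu, mu_prime, d, n=200000, seed=3):
    """Score unscattered transmission through a slab of thickness d at the
    baseline coefficient mu while reweighting every surviving history by
    the likelihood ratio for a perturbed coefficient mu_prime."""
    rng = random.Random(seed)
    base = pert = 0.0
    w = math.exp(-(mu_prime - mu) * d)   # likelihood ratio for surviving depth d
    for _ in range(n):
        s = rng.expovariate(mu)          # free path sampled at baseline mu
        if s > d:                        # photon crosses the slab uncollided
            base += 1.0
            pert += w
    return base / n, pert / n

base, pert = transmission_pmc(1.0, 1.5, 1.0)   # expect exp(-1) and exp(-1.5)
```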

  10. METHES: A Monte Carlo collision code for the simulation of electron transport in low temperature plasmas

    NASA Astrophysics Data System (ADS)

    Rabie, M.; Franck, C. M.

    2016-06-01

    We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
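
    Established Monte Carlo swarm techniques of this kind rest on null-collision (thinning) sampling of the next collision time when the collision frequency varies. A minimal sketch with an artificial time-dependent rate; the rate function here is invented for illustration, not a real cross-section-derived frequency.

```python
import math
import random

def sample_collision_times(rate, rate_max, t_end, rng):
    """Null-collision (thinning) sampling: draw candidate events at the
    constant ceiling rate_max, accept each with probability rate(t)/rate_max;
    rejected candidates are 'null' collisions that leave the electron alone."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)
        if t >= t_end:
            return times
        if rng.random() < rate(t) / rate_max:
            times.append(t)

rng = random.Random(5)
counts = [len(sample_collision_times(lambda t: 2.0 + math.sin(t), 3.0, 10.0, rng))
          for _ in range(2000)]
mean_count = sum(counts) / len(counts)
# Expected count = integral of the rate over [0, 10] = 20 + 1 - cos(10) ~ 21.84
```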

  11. On Monte Carlo modeling of megavoltage photon beams: A revisited study on the sensitivity of beam parameters

    SciTech Connect

    Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie

    2011-01-15

    Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2×2 to 35×35 cm²). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma [''On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam,'' Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E_AV, the mean energy; FWHM, the energy spectrum broadening; and R, the beam radius), the angular divergence (θ) of primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2×2 to 35×35 cm². In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use θ = 0°, while the 4 and 15 MV beam models require θ = 0.5° and 0.6°, respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2×2 cm²) and large (e.g., 35×35 cm²) fields, while a perfect agreement is maintained for the 10×10 cm² field. The influence of R on the central-axis depth dose and the strong influence of θ on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields proved to be more sensitive to variations in E_AV, R, and θ than the 10×10 cm² field. Monte Carlo beam models need to be validated for a wide range of field sizes, including small field sizes (e.g., 2×2 cm²).

  12. SAF values for internal photon emitters calculated for the RPI-P pregnant-female models using Monte Carlo methods

    SciTech Connect

    Shi, C. Y.; Xu, X. George; Stabin, Michael G.

    2008-07-15

    Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
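
    The tabulated quantity, the specific absorbed fraction, is simply the absorbed fraction per unit target mass. A one-line sketch of the MIRD-schema definition; the numbers in the usage line are invented tally values.

```python
def specific_absorbed_fraction(e_absorbed_mev, e_emitted_mev, target_mass_kg):
    """MIRD-schema specific absorbed fraction (kg^-1): the fraction of the
    emitted photon energy absorbed in the target organ, per unit target mass."""
    return (e_absorbed_mev / e_emitted_mev) / target_mass_kg

# Invented tallies: 0.05 MeV absorbed out of 1 MeV emitted, 0.31 kg organ.
saf = specific_absorbed_fraction(0.05, 1.0, 0.31)
```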

  13. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    PubMed Central

    Jabbari, Keyvan; Seuntjens, Jan

    2014-01-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10⁶ particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994

  14. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

    PubMed

    Jabbari, Keyvan; Seuntjens, Jan

    2014-07-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for the generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10⁶ particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
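
    The pre-generated track idea can be sketched as replaying stored step sequences per material instead of sampling each interaction. Everything below (step lengths, energy losses) is toy data standing in for the MCNPX-generated database described above.

```python
import random

def pregenerate_tracks(n_tracks, rng):
    """Toy track library: each track is a list of (step_length_cm,
    energy_loss_mev) pairs standing in for the pre-computed database."""
    tracks = []
    for _ in range(n_tracks):
        tracks.append([(0.1, 1.0 + 0.1 * rng.uniform(-1.0, 1.0))
                       for _ in range(200)])
    return tracks

def transport(energy_mev, library, rng):
    """Replay a randomly chosen pre-generated track until the proton's
    energy is exhausted; return the penetration depth in cm."""
    depth = 0.0
    for step, de in rng.choice(library):
        if energy_mev <= 0.0:
            break
        depth += step
        energy_mev -= de
    return depth

rng = random.Random(7)
library = pregenerate_tracks(50, rng)
depths = [transport(20.0, library, rng) for _ in range(100)]
```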

  15. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations.

    PubMed

    Ulmer, W; Pyyry, J; Kaissl, W

    2005-04-21

    Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab-geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤ 5.5 cm² and densities ≤ 0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. PMID:15815095
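
    A superposition/convolution dose engine spreads the total energy released per unit mass (terma) with an energy-deposition kernel. A 1D sketch follows, reducing the triple-Gaussian kernel of the paper to a single normalized Gaussian; the grid and coefficients are illustrative, not fitted values.

```python
import numpy as np

# Depth grid (cm) and attenuated primary energy release (terma), illustrative.
z = np.linspace(0.0, 10.0, 201)
terma = np.exp(-0.05 * z)

# Single normalized Gaussian deposition kernel centered on the grid midpoint,
# a stand-in for the paper's triple-Gaussian pencil-beam kernel.
kernel = np.exp(-0.5 * ((z - 5.0) / 0.2) ** 2)
kernel /= kernel.sum()

# Superposition/convolution: dose = terma convolved with the kernel.
dose = np.convolve(terma, kernel, mode="same")
```

    Because the kernel sums to one, the convolution conserves energy except for small losses at the grid edges.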

  16. Monte Carlo evaluation of electron transport in heterojunction bipolar transistor base structures

    NASA Astrophysics Data System (ADS)

    Maziar, C. M.; Klausmeier-Brown, M. E.; Bandyopadhyay, S.; Lundstrom, M. S.; Datta, S.

    1986-07-01

    Electron transport through base structures of Al(x)Ga(1-x)As heterojunction bipolar transistors is evaluated by Monte Carlo simulation. Simulation results demonstrate the effectiveness of both ballistic launching ramps and graded bases for reducing base transit time. Both techniques are limited, however, in their ability to maintain short transit times across the wide bases that are desirable for reduction of base resistance. Simulation results demonstrate that neither technique is capable of maintaining a 1-ps transit time across a 0.25-micron base. The physical mechanisms responsible for limiting the performance of each structure are identified and a promising hybrid structure is described.

  17. Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation.

    PubMed

    Jacques, Steven L

    2014-12-01

    The generation of photoacoustic signals for imaging objects embedded within tissues is dependent on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB™ acoustic algorithm predicts signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-μm-dia. blood vessel centered at a 1-mm depth in the skin. The simulation yields acoustic signals received by a surface detector, which are generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB™ version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided. PMID:25426426
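
    The coupling step above converts the Monte Carlo fluence map into an initial pressure via the photoacoustic source term p0 = Γ·μa·Φ. A one-function sketch; the usage values for blood at 532 nm (Grüneisen parameter, absorption coefficient, surviving fluence) are assumed illustrative numbers, not outputs of the paper's simulation.

```python
def initial_pressure_pa(gruneisen, mu_a_per_cm, fluence_j_per_cm2):
    """Photoacoustic source term p0 = Grueneisen * mu_a * fluence: the
    absorbed energy density (J/cm^3) scaled to pressure (1 J/cm^3 = 1e6 Pa)."""
    return gruneisen * mu_a_per_cm * fluence_j_per_cm2 * 1.0e6

# Assumed numbers: Grueneisen ~0.15, mu_a ~230 /cm for blood at 532 nm,
# 10 mJ/cm^2 of fluence surviving to the vessel after tissue attenuation.
p0 = initial_pressure_pa(0.15, 230.0, 0.010)
```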

  18. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.

    PubMed

    Piels, Molly; Zibar, Darko

    2016-02-01

    The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation. PMID:26906783
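
    The Markov chain Monte Carlo approach advocated above can be illustrated with a random-walk Metropolis sampler on a one-parameter toy problem. The "measured" data, noise level, and parameter values below are all invented for illustration; a real application would target the equivalent-circuit parameters behind the reflection-coefficient data.

```python
import math
import random

def metropolis(logpost, x0, step, n, rng):
    """Random-walk Metropolis sampling of a one-parameter posterior."""
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:   # accept uphill and some downhill
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy problem: infer a junction resistance R (ohms) from noisy synthetic data.
rng = random.Random(11)
true_r, sigma = 50.0, 2.0
data = [true_r + rng.gauss(0.0, sigma) for _ in range(40)]
logpost = lambda r: -sum((d - r) ** 2 for d in data) / (2.0 * sigma ** 2)

chain = metropolis(logpost, x0=40.0, step=1.0, n=5000, rng=rng)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

    Unlike a nonlinear least-squares fit, the chain's spread directly yields credible intervals for the parameter, which is the advantage the abstract highlights.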

  19. Diagnostic and Impact Estimation of Nuclear Data Inconsistencies on Energy Deposition Calculations in Coupled Neutron-Photon Monte-Carlo Simulation, with TRIPOLI-4®

    NASA Astrophysics Data System (ADS)

    Péron, A.; Malouch, F.; Zoia, A.; Diop, C. M.

    2014-06-01

    Nuclear heating evaluation by Monte-Carlo simulation requires a coupled neutron-photon calculation so as to take into account the contribution of secondary photons. Nuclear data are essential for a good calculation of neutron and photon energy deposition and for secondary photon generation. However, a number of isotopes in the most common nuclear data libraries are affected by energy and/or momentum conservation errors concerning photon production, or by inaccurate thresholds for photon emission cross sections. In this paper, we perform a comprehensive survey of the three evaluations JEFF3.1.1, JEFF3.2T2 (beta version) and ENDF/B-VII.1, over 142 isotopes. The aim of this survey is, on the one hand, to check the existence of photon production data for neutron reactions and, on the other hand, to verify the consistency of these data using the kinematic limits method recently implemented in the TRIPOLI-4 Monte-Carlo code, developed by CEA (Saclay center). The impact of these inconsistencies on energy deposition scores has then been estimated for two materials, using a specific nuclear heating calculation scheme in the context of the OSIRIS Material Testing Reactor (CEA/Saclay).

  20. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.

    PubMed

    Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E

    2016-05-01

    The application of combined neutron-photon tomography to 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and the related mass density; neutrons aid in estimating the product of density and the material-specific microscopic cross section. The ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized. PMID:26953978
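
    The material-identification idea, taking the ratio of neutron to photon attenuation line integrals, can be sketched as follows. The per-cm attenuation coefficients below are assumed illustrative values, not MCNP5 results.

```python
import math

def attenuation_ratio(i0_n, i_n, i0_p, i_p):
    """Ratio of measured neutron to photon attenuation line integrals;
    photons track electron (mass) density while neutrons add composition
    sensitivity, so the ratio separates materials of similar density."""
    return math.log(i0_n / i_n) / math.log(i0_p / i_p)

# Assumed per-cm neutron/photon attenuation coefficient pairs.
MATERIALS = {"graphite": (0.55, 0.14), "iron": (0.45, 0.44),
             "polyethylene": (0.75, 0.09)}

def classify(ratio):
    """Pick the candidate whose coefficient ratio best matches the measurement."""
    return min(MATERIALS,
               key=lambda m: abs(MATERIALS[m][0] / MATERIALS[m][1] - ratio))
```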

  1. Evaluation of PENFAST--a fast Monte Carlo code for dose calculations in photon and electron radiotherapy treatment planning.

    PubMed

    Habib, B; Poumarede, B; Tola, F; Barthe, J

    2010-01-01

    The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons) and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within ±1% or 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree within 1% or 1 mm with those produced by PENELOPE, and within 2% or 2 mm with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between the two MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258

  2. A direction-selective flattening filter for clinical photon beams. Monte Carlo evaluation of a new concept

    NASA Astrophysics Data System (ADS)

    Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

    2011-07-01

    A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF), by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained for any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm{sup 2}, 4 × 14.5 cm{sup 2} or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors of 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. 
Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement, and a slight, probably just noticeable improvement of the treatment outcome for radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors of 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors of 1/2.81 (DSF) and 1/3.49 (FFF).

  3. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere

    SciTech Connect

    Babich, L. P. Donskoy, E. N.; Kutsyk, I. M.

    2008-07-15

    Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.

  4. Output correction factors for nine small field detectors in 6 MV radiation therapy photon beams: A PENELOPE Monte Carlo study

    SciTech Connect

    Benmakhlouf, Hamza; Sempau, Josep; Andreo, Pedro

    2014-04-15

    Purpose: To determine detector-specific output correction factors, k{sub Qclin,Qmsr}{sup fclin,fmsr}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm{sup 2} and 10 × 10 cm{sup 2} from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm{sup 2}. Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings by other authors; different corrections for the detectors from the two manufacturers were obtained. 
The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
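
    The formalism referred to above (Alfonso et al.) defines the output correction factor as a ratio of dose ratios: Monte Carlo dose to water (detector absent) over Monte Carlo dose scored inside the detector model, each taken as a clinical-field to reference-field ratio. A minimal sketch, with illustrative numbers rather than the paper's data:

```python
# Sketch of the output correction factor as a ratio of dose ratios.
# Inputs are MC-scored doses; the numbers below are illustrative only.
def output_correction_factor(d_w_clin, d_w_msr, d_det_clin, d_det_msr):
    """k = (Dw_clin / Dw_msr) / (Ddet_clin / Ddet_msr), where Dw is the dose
    to a small water volume with the detector absent and Ddet is the dose
    scored inside the detector model, for the clinical (clin) and
    machine-specific reference (msr) fields."""
    return (d_w_clin / d_w_msr) / (d_det_clin / d_det_msr)

# A diode that over-responds by ~4% in a small field needs k close to 0.96:
k = output_correction_factor(0.60, 1.00, 0.625, 1.00)
```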

  5. Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC

    SciTech Connect

    Sun, C. C.; You, A. H.; Wong, E. K.

    2010-07-07

    The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. This MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1 × 10{sup 18} cm{sup -3} at room temperature. The dependence of the simulated drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
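
    The free flight between scattering events in such MC transport models is sampled from an exponential distribution set by the total scattering rate. A minimal sketch with an assumed rate value, not the authors' code:

```python
import math
import random

# Free-flight sampling between scattering events, as used in semiconductor
# MC transport: with total scattering rate Gamma (1/s), flight times are
# exponentially distributed with mean 1/Gamma.
def sample_free_flight(total_rate, rng=random.random):
    """Draw a free-flight time t = -ln(r)/Gamma from the exponential
    distribution with rate Gamma."""
    return -math.log(rng()) / total_rate

# The sample mean should approach 1/Gamma (rate value below is assumed):
random.seed(0)
rate = 1e13  # a representative total phonon-scattering rate, 1/s
mean_t = sum(sample_free_flight(rate) for _ in range(20000)) / 20000
```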

  6. Monte Carlo simulations of electron transport for electron beam-induced deposition of nanostructures

    NASA Astrophysics Data System (ADS)

    Salvat-Pujol, Francesc; Jeschke, Harald O.; Valenti, Roser

    2013-03-01

    Tungsten hexacarbonyl, W(CO)6, is a particularly interesting precursor molecule for electron beam-induced deposition of nanoparticles, since it yields deposits whose electronic properties can be tuned from metallic to insulating. However, the growth of tungsten nanostructures poses experimental difficulties: the metal content of the nanostructure is variable. Furthermore, fluctuations in the tungsten content of the deposits seem to trigger the growth of the nanostructure. Monte Carlo simulations of electron transport have been carried out with the radiation-transport code Penelope in order to study the charge and energy deposition of the electron beam in the deposit and in the substrate. These simulations allow us to examine the conditions under which nanostructure growth takes place and to highlight the relevant parameters in the process.

  7. A full-band Monte Carlo model for hole transport in silicon

    NASA Astrophysics Data System (ADS)

    Jallepalli, S.; Rashed, M.; Shih, W.-K.; Maziar, C. M.; Tasch, A. F., Jr.

    1997-03-01

    Hole transport in bulk silicon is explored using an efficient and accurate Monte Carlo (MC) tool based on the local pseudopotential band structure. Acoustic and optical phonon scattering, ionized impurity scattering, and impact ionization are the dominant scattering mechanisms that have been included. In the interest of computational efficiency, momentum relaxation times have been used to describe ionized impurity scattering and self-scattering rates have been computed in a dynamic fashion. The temperature and doping dependence of low-field hole mobility is obtained and good agreement with experimental data has been observed. MC extracted impact ionization coefficients are also shown to agree well with published experimental data. Momentum and energy relaxation times are obtained as a function of the average hole energy for use in moment based hydrodynamic simulators. The MC model is suitable for studying both low-field and high-field hole transport in silicon.
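
    The self-scattering rates mentioned above refer to the standard null-collision trick: the energy-dependent total scattering rate is padded up to a constant Gamma0 so that flight times can be drawn from a single exponential, and "self-scattering" events leave the carrier state unchanged. A sketch under that textbook scheme, not the authors' implementation:

```python
import math
import random

# Self-scattering (null-collision) free-flight sampling: sample flights at
# the constant padded rate Gamma0, then accept a real scattering event with
# probability real_rate/Gamma0; rejections are self-scatterings that leave
# the carrier state unchanged.
def next_real_scattering(real_rate_of, gamma0, state, rng=random.random):
    """Advance time until a real (non-self) scattering occurs and return the
    elapsed time. real_rate_of(state) must never exceed gamma0."""
    t = 0.0
    while True:
        t += -math.log(rng()) / gamma0
        if rng() < real_rate_of(state) / gamma0:  # accept: real scattering
            return t
        # else: self-scattering; state unchanged, keep flying
```

With a constant real rate, the accepted events form an exponential process at that real rate, which is the property that makes the trick exact.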

  8. Hybrid two-dimensional Monte-Carlo electron transport in self-consistent electromagnetic fields

    SciTech Connect

    Mason, R.J.; Cranfill, C.W.

    1985-01-01

    The physics and numerics of the hybrid electron transport code ANTHEM are described. The need for hybrid modeling of laser-generated electron transport is outlined, and a general overview of the hybrid implementation in ANTHEM is provided. ANTHEM treats the background ions and electrons in a laser target as coupled fluid components moving relative to a fixed Eulerian mesh. The laser converts cold electrons to an additional hot electron component, which evolves on the mesh as either a third coupled fluid or as a set of Monte Carlo PIC particles. The fluids and particles move in two dimensions through electric and magnetic fields calculated via the Implicit Moment method. The hot electrons are coupled to the background thermal electrons by Coulomb drag, and both the hot and cold electrons undergo Rutherford scattering against the ion background. Subtleties of the implicit E- and B-field solutions, the coupled hydrodynamics, and large-time-step Monte Carlo particle scattering are discussed. Sample applications are presented.

  9. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2{sup 21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
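
    Point (2) above, deciding that particle streaming communication has finished, can be pictured with a serial toy model in which domains exchange particles until every buffer is empty. This is a stand-in illustration of the termination condition only, not the authors' MPI algorithm:

```python
import random

# Toy picture of domain-decomposed particle streaming: each particle is
# either absorbed in its current domain or streams to a random neighbor;
# the run terminates when every domain's buffer is empty. A serial
# stand-in for the distributed termination-detection problem.
def run_domains(n_domains, n_particles, p_absorb=0.3, seed=0):
    rng = random.Random(seed)
    buffers = [[object() for _ in range(n_particles)]
               for _ in range(n_domains)]
    steps = 0
    while any(buffers):                       # global termination test
        new_buffers = [[] for _ in range(n_domains)]
        for d, buf in enumerate(buffers):
            for p in buf:
                if rng.random() >= p_absorb:  # stream to a random neighbor
                    new_buffers[(d + rng.choice((-1, 1))) % n_domains].append(p)
        buffers = new_buffers
        steps += 1
    return steps
```

In the real distributed setting the `while any(buffers)` test is itself the hard part, since no processor can see all buffers at once; that is the problem the paper's streaming-termination algorithm solves.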

  10. Multilevel Monte Carlo for Two Phase Flow and Transport in a Subsurface Reservoir with Random Permeability

    NASA Astrophysics Data System (ADS)

    Müller, Florian; Jenny, Patrick; Meyer, Daniel

    2014-05-01

    To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the transition of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, and stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
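
    The MLMC idea described above rests on the telescoping identity E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)], with most samples spent on the cheap coarse levels and only a few on the expensive fine ones. A minimal sketch with a toy level hierarchy, not a reservoir simulator:

```python
import random

# Minimal multilevel Monte Carlo estimator. Each correction term
# E[P_l - P_(l-1)] is sampled independently; the fine and coarse
# evaluations within one term must share the same random input.
def mlmc_estimate(sampler, n_per_level):
    """sampler(level, u) -> level-l approximation evaluated at random
    input u; n_per_level[l] is the sample count for correction level l."""
    total, rng = 0.0, random.Random(42)
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()                    # shared random input
            fine = sampler(level, u)
            coarse = sampler(level - 1, u) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

# Toy hierarchy: P_l(u) = u^2 + bias/2^l, so the telescoped estimate
# targets E[u^2] + bias/2^L = 1/3 + 0.125 for L = 2.
toy = lambda level, u: u * u + 0.5 / 2 ** level
est = mlmc_estimate(toy, [4000, 1000, 250])
```

Note the decreasing sample counts: the correction terms shrink with level, so fewer fine-grid samples are needed, which is the source of the speed-up reported above.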

  11. Single photon transport along a one-dimensional waveguide with a side manipulated cavity QED system.

    PubMed

    Yan, Cong-Hua; Wei, Lian-Fu

    2015-04-20

    An external mirror coupling to a cavity with a two-level atom inside is put forward to control the photon transport along a one-dimensional waveguide. Using a full quantum theory of photon transport in real space, it is shown that the Rabi splittings of the photonic transmission spectra can be controlled by the cavity-mirror couplings; the splittings could still be observed even when the cavity-atom system works in the weak coupling regime, and the transmission probability of the resonant photon can be modulated from 0 to 100%. Additionally, our numerical results show that the appearance of Fano resonance is related to the strengths of the cavity-mirror coupling and the dissipations of the system. An experimental demonstration of the proposal with the current photonic crystal waveguide technique is suggested. PMID:25969078
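
    The Fano resonance mentioned above follows the textbook lineshape F(eps) = (eps + q)^2 / (eps^2 + 1), where eps is the reduced detuning and q the asymmetry parameter. The sketch below evaluates only this standard formula, not the paper's full real-space scattering theory:

```python
# Textbook Fano lineshape: F(eps) = (eps + q)^2 / (eps^2 + 1).
# The dip (F = 0) sits at eps = -q and the peak (F = 1 + q^2) at eps = 1/q.
def fano_lineshape(eps, q):
    return (eps + q) ** 2 / (eps ** 2 + 1.0)
```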

  12. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William

    2011-01-01

    A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform the numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the fluence energy spectra for 105 days at Europa provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI--J35 of the Galileo extended mission (1996-2001). 
For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer with the choice of graded-Z shield analysis.
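
    The ray-trace TID estimate described above can be sketched as a solid-angle average of a dose-depth curve over the per-ray shield thicknesses supplied by the CAD package. The curve data here are made up for illustration, not GIRE/JPL output:

```python
import bisect

# Sketch of a ray-trace TID estimate: each ray from the target point has a
# shield thickness; the TID is the average of a dose-depth curve evaluated
# at those thicknesses (uniform solid-angle weighting assumed).
def tid_at_point(ray_thicknesses, depth_grid, dose_grid):
    """Linearly interpolate dose(depth) for each ray and average.
    depth_grid must be sorted ascending; flat extrapolation at the ends."""
    def dose_at(t):
        i = bisect.bisect_left(depth_grid, t)
        if i == 0:
            return dose_grid[0]
        if i == len(depth_grid):
            return dose_grid[-1]
        x0, x1 = depth_grid[i - 1], depth_grid[i]
        y0, y1 = dose_grid[i - 1], dose_grid[i]
        return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return sum(dose_at(t) for t in ray_thicknesses) / len(ray_thicknesses)
```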

  13. Unified single-photon and single-electron counting statistics: From cavity QED to electron transport

    SciTech Connect

    Lambert, Neill; Chen, Yueh-Nan; Nori, Franco

    2010-12-15

    A key ingredient of cavity QED is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultrashort laser pulses allows one to use such a system as a source of single photons--a vital ingredient in quantum information and optical computing schemes. Here we analyze and time-adjust the photon-counting statistics of such a single-photon source and show that the photon statistics can be described by a simple transport-like nonequilibrium model. We then show that there is a one-to-one correspondence of this model to that of nonequilibrium transport of electrons through a double quantum dot nanostructure, unifying the fields of photon-counting statistics and electron-transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron-transport systems (e.g., super-Poissonian shot noise and an extension of the Leggett-Garg inequality) to single-photon-source experiments.
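
    Super-Poissonian shot noise, one of the diagnostics mentioned above, is conventionally quantified by the Fano factor F = Var(n)/<n>: F = 1 for Poissonian counting statistics and F > 1 for super-Poissonian. A generic estimator from a list of counting-window totals (illustrative, not the paper's analysis):

```python
# Fano factor of a counting record: variance over mean of the number of
# detection events per counting window. F = 1 is the Poissonian benchmark;
# F > 1 signals super-Poissonian (bunched) statistics.
def fano_factor(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean
```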

  14. Monte Carlo model of the transport in the atmosphere of relativistic electrons and γ-rays associated with TGFs

    NASA Astrophysics Data System (ADS)

    Sarria, D.; Forme, F.; Blelly, P.

    2013-12-01

    Onboard the TARANIS satellite, the CNES mission dedicated to the study of TLEs and TGFs, IDEE and XGRE are the two instruments which will measure relativistic electrons and X and gamma rays. At the altitude of the satellite, the fluxes have been significantly altered by the filtering of the atmosphere, and the satellite measures only a subset of the particles. Therefore, the inverse problem, obtaining information on the sources and on the mechanisms responsible for these emissions, is rather tough to tackle, especially if we want to take advantage of the other instruments which will provide indirect information on those particles. The only reasonable way to solve this problem is to embed in the data processing a theoretical approach using a numerical model of the generation and transport of these burst emissions. For this purpose, we have started developing a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons and gamma-rays. After a brief presentation of the model and its validation by comparison with GEANT 4, we discuss how the photons and electrons may be spatially dispersed as a function of their energy at the altitude of the satellite, depending on the source properties, and the impact that this could have on detection by the satellite. Then, we give preliminary results on the interaction of the energetic particles with the neutral atmosphere, mainly in terms of the production rates of excited states, which will be accessible through the MCP experiment, and of ionized species, which are important for the electrodynamics.

  15. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    NASA Astrophysics Data System (ADS)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
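
    The VHS model named above uses, in Bird's standard form, a collision cross section that scales with relative speed as sigma = sigma_ref * (c_ref/c_r)^(2*omega - 1), where omega is the viscosity-temperature exponent. The sketch below evaluates that form with placeholder reference values, not the paper's calibrated parameters:

```python
# Variable Hard Sphere (VHS) collision cross section in Bird's standard
# form: sigma = sigma_ref * (c_ref / c_r)^(2*omega - 1), with omega the
# viscosity-temperature exponent and c_r the relative speed.
# omega = 0.5 recovers the speed-independent hard-sphere limit.
def vhs_cross_section(c_r, sigma_ref, c_ref, omega):
    return sigma_ref * (c_ref / c_r) ** (2.0 * omega - 1.0)
```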

  16. Jet transport and photon bremsstrahlung via longitudinal and transverse scattering

    NASA Astrophysics Data System (ADS)

    Qin, Guang-You; Majumder, Abhijit

    2015-04-01

    We study the effect of multiple scatterings on the propagation of hard partons and the production of jet-bremsstrahlung photons inside a dense medium in the framework of deep-inelastic scattering off a large nucleus. We include the momentum exchanges in both longitudinal and transverse directions between the hard partons and the constituents of the medium. Keeping up to the second order in a momentum gradient expansion, we derive the spectrum for the photon emission from a hard quark jet when traversing dense nuclear matter. Our calculation demonstrates that the photon bremsstrahlung process is influenced not only by the transverse momentum diffusion of the propagating hard parton, but also by the longitudinal drag and diffusion of the parton momentum. A notable outcome is that the longitudinal drag tends to reduce the amount of stimulated emission from the hard parton.

  17. Ballistic transport in one-dimensional random dimer photonic crystals

    NASA Astrophysics Data System (ADS)

    Cherid, Samira; Bentata, Samir; Zitouni, Ali; Djelti, Radouan; Aziz, Zoubir

    2014-04-01

    Using the transfer-matrix technique and the Kronig-Penney model, we numerically and analytically investigate the effect of short-range correlated disorder in the Random Dimer Model (RDM) on the transmission properties of light in one-dimensional photonic crystals made of three different materials. Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that one kind of these layers always appears in pairs. It is shown that one-dimensional random dimer photonic crystals support two types of extended modes. By shifting the dimer resonance toward the host fundamental stationary resonance state, we demonstrate the existence of a ballistic response in these systems.
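
    The transfer-matrix technique named above can be sketched at normal incidence with 2x2 characteristic matrices, one per layer, whose product yields the transmission of the whole stack. Vacuum bounding media are assumed; this is an illustration of the method, not the paper's three-material RDM code:

```python
import cmath

# Normal-incidence characteristic-matrix transmission through a 1D stack.
# Each layer is a (refractive index, thickness) pair; incident and exit
# media are vacuum. The layer matrix is [[cos d, i sin d / n],
# [i n sin d, cos d]] with phase thickness d = k0 * n * thickness.
def transmission(layers, wavelength):
    k0 = 2.0 * cmath.pi / wavelength
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0   # start from the identity
    for n, d in layers:
        delta = k0 * n * d
        c, s = cmath.cos(delta), cmath.sin(delta)
        l00, l01, l10, l11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * l00 + m01 * l10, m00 * l01 + m01 * l11,
                              m10 * l00 + m11 * l10, m10 * l01 + m11 * l11)
    t = 2.0 / (m00 + m01 + m10 + m11)          # vacuum on both sides
    return abs(t) ** 2
```

A quarter-wave layer of index 2 on a matched substrate should transmit T = 1 - ((1 - n^2)/(1 + n^2))^2 = 0.64, which makes a convenient sanity check.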

  18. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements

    NASA Astrophysics Data System (ADS)

    Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David

    2015-07-01

    Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of tissue phantoms were measured with a spectrometer under several illumination and optical settings for various homogeneous tissue phantoms. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements from an added-absorber experiment on a series of phantoms showed that the absorption of the dye scales linearly when fit to both the MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
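
    The MCML-style photon migration underlying these simulations can be caricatured with a 1D random walk: exponential free paths governed by mu_t = mu_a + mu_s, survival at each interaction with probability equal to the albedo, and a tally of photons escaping the top surface. A toy stand-in for the physics, not the MCML or MCX algorithms:

```python
import math
import random

# Stripped-down photon-migration random walk in a semi-infinite homogeneous
# medium: mu_a and mu_s are the absorption and scattering coefficients
# (1/cm), scattering is redrawn isotropically in the depth cosine, and the
# diffusely reflected fraction is tallied.
def diffuse_reflectance(mu_a, mu_s, n_photons=10000, seed=7):
    rng = random.Random(seed)
    mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)
    reflected = 0
    for _ in range(n_photons):
        z, uz = 0.0, 1.0                     # launch at surface, heading down
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)  # exponential step
            if z <= 0.0:                     # escaped through the top
                reflected += 1
                break
            if rng.random() >= albedo:       # absorbed at this interaction
                break
            uz = 2.0 * rng.random() - 1.0    # isotropic rescatter (1D cosine)
    return reflected / n_photons
```

As expected physically, a high-albedo medium reflects far more than a strongly absorbing one, which is the qualitative behavior the full MCML/MCX codes quantify per radial bin and wavelength.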

  19. Neoclassical electron transport calculation by using {delta}f Monte Carlo method

    SciTech Connect

    Matsuoka, Seikichi; Satake, Shinsuke; Yokoyama, Masayuki; Wakasa, Arimitsu; Murakami, Sadayoshi

    2011-03-15

    High electron temperature plasmas with steep temperature gradients in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition phenomenon from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the {delta}f Monte Carlo method and has so far been applied to ion species, is extended to treat electron neoclassical transport. To check the validity of our new FORTEC-3D code, benchmark calculations are carried out with the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark calculation shows good agreement among the FORTEC-3D, GSRAKE and DCOM/NNW codes for a low temperature (T{sub e}(0)=1.0 keV) plasma. It is also confirmed that the finite orbit width effect included in FORTEC-3D has little effect on neoclassical transport, even for low collisionality, if the plasma is at low temperature. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport for high electron temperature plasmas by rigorously solving the kinetic equation, including the effect of finite radial drift.
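
    The ambipolar condition mentioned above fixes the radial electric field E_r at a root of Gamma_i(E_r) = Gamma_e(E_r). A toy bisection solve with made-up monotone flux models; the real problem can have multiple roots (the ion and electron roots discussed above), which a single bracketed bisection does not capture:

```python
# Toy ambipolar root finder: bisect the flux imbalance
# f(E) = Gamma_i(E) - Gamma_e(E) on a bracketing interval [lo, hi].
def ambipolar_field(flux_ion, flux_electron, lo, hi, tol=1e-10):
    f = lambda e: flux_ion(e) - flux_electron(e)
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Made-up fluxes: ion flux falls and electron flux rises with E_r,
# crossing at E_r = 1/3 (arbitrary units).
e_r = ambipolar_field(lambda e: 1.0 - e, lambda e: 2.0 * e, 0.0, 1.0)
```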

  20. Neoclassical electron transport calculation by using δf Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Matsuoka, Seikichi; Satake, Shinsuke; Yokoyama, Masayuki; Wakasa, Arimitsu; Murakami, Sadayoshi

    2011-03-01

    High electron temperature plasmas with steep temperature gradients in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition phenomenon from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the δf Monte Carlo method and has so far been applied to ion species, is extended to treat electron neoclassical transport. To check the validity of our new FORTEC-3D code, benchmark calculations are carried out with the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark calculation shows good agreement among the FORTEC-3D, GSRAKE and DCOM/NNW codes for a low temperature (Te(0)=1.0 keV) plasma. It is also confirmed that the finite orbit width effect included in FORTEC-3D has little effect on neoclassical transport, even for low collisionality, if the plasma is at low temperature. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport for high electron temperature plasmas by rigorously solving the kinetic equation, including the effect of finite radial drift.

  1. 3D electro-thermal Monte Carlo study of transport in confined silicon devices

    NASA Astrophysics Data System (ADS)

    Mohamed, Mohamed Y.

    The simultaneous explosion of portable microelectronic devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less feasible and less practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport solver so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature-dependent scattering rates. 
One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non-equilibrium carrier dynamics, a key determinant of device performance. The dependence of short-channel effects and Joule heating on the lateral scaling of the cross-section is specifically explored in this work. Finally, this dissertation studies the basic tradeoffs among various n-channel multigate architectures with square cross-sectional side lengths ranging from 30 nm down to 5 nm.

  2. Monte Carlo analysis of high-frequency non-equilibrium transport in mercury-cadmium-telluride for infrared detection

    NASA Astrophysics Data System (ADS)

    Palermo, Christophe; Varani, Luca; Vaissière, Jean-Claude

    2004-04-01

    We present a theoretical analysis of both static and small-signal electron transport in Hg0.8Cd0.2Te in order to study the high-frequency behaviour of this material, usually employed for infrared detection. Firstly, we simulate static conditions using a Monte Carlo simulation in order to extract transport parameters. Then, an analytical method based on hydrodynamic equations is used to perform the small-signal study by modelling the high-frequency differential mobility. This approach allows a full study of the frequency response at arbitrary electric fields starting only from static parameters, and overcomes technical problems of direct Monte Carlo simulations.

  3. Massively parallel kinetic Monte Carlo simulations of charge carrier transport in organic semiconductors

    NASA Astrophysics Data System (ADS)

    van der Kaap, N. J.; Koster, L. J. A.

    2016-02-01

    A parallel, lattice-based kinetic Monte Carlo simulation is developed that runs on a GPGPU board and includes Coulomb-like particle-particle interactions. The performance of this computationally expensive problem is improved by modifying the interaction potential due to nearby particle moves, instead of fully recalculating it. This modification is achieved by adding dipole correction terms that represent the particle move. Exact evaluation of these terms is guaranteed by representing all interactions as 32-bit floating-point numbers, of which only the integers between -2^22 and 2^22 are used. We validate our method by modelling the charge transport in disordered organic semiconductors, including Coulomb interactions between charges. Performance is mainly governed by the particle density in the simulation volume, and improves for increasing densities. Our method allows calculations on large volumes including particle-particle interactions, which is important in the field of organic semiconductors.
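    The core of such a lattice kinetic Monte Carlo model can be sketched in a few lines. The single-carrier, 1D Python sketch below uses Miller-Abrahams rates with Gaussian site-energy disorder (all parameters illustrative); the paper's GPU parallelism and dipole-corrected Coulomb bookkeeping are omitted.

```python
import math
import random

random.seed(1)
N = 50                 # 1D lattice sites (illustrative)
KT = 0.025             # thermal energy (eV)
NU0 = 1.0              # attempt frequency (arbitrary units)
E = [random.gauss(0.0, 0.05) for _ in range(N)]  # Gaussian site-energy disorder (eV)

def rate(i, j):
    """Miller-Abrahams hop rate i -> j: penalized uphill, constant downhill."""
    dE = E[j] - E[i]
    return NU0 * (math.exp(-dE / KT) if dE > 0.0 else 1.0)

def kmc_step(i, t):
    """One kinetic MC step: choose a nearest-neighbour hop with probability
    proportional to its rate, and advance time by an exponential waiting time."""
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < N]
    rates = [rate(i, j) for j in nbrs]
    total = sum(rates)
    x = random.random() * total
    acc = 0.0
    for j, r in zip(nbrs, rates):
        acc += r
        if x <= acc:
            break
    t += -math.log(1.0 - random.random()) / total   # exponential residence time
    return j, t

pos, t = N // 2, 0.0
for _ in range(1000):
    pos, t = kmc_step(pos, t)
```

    A production code would track many carriers at once and update their pairwise Coulomb energies incrementally, which is precisely where the paper's dipole correction terms enter.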

  4. Monte Carlo charge transport and photoemission from negative electron affinity GaAs photocathodes

    NASA Astrophysics Data System (ADS)

    Karkare, Siddharth; Dimitrov, Dimitre; Schaff, William; Cultrera, Luca; Bartnik, Adam; Liu, Xianghong; Sawyer, Eric; Esposito, Teresa; Bazarov, Ivan

    2013-03-01

    High quantum yield, low transverse energy spread, and prompt response time make GaAs activated to negative electron affinity an ideal candidate for a photocathode in high brightness photoinjectors. Even after decades of investigation, the exact mechanism of electron emission from GaAs is not well understood. Here, photoemission from such photocathodes is modeled using detailed Monte Carlo electron transport simulations. Simulations show a quantitative agreement with the experimental results for quantum efficiency, energy distributions of emitted electrons, and response time without the assumption of any ad hoc parameters. This agreement between simulation and experiment sheds light on the mechanism of electron emission and provides an opportunity to design novel semiconductor photocathodes with optimized performance.

  5. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    SciTech Connect

    Romano, Paul K; Brown, Forrest B; Forget, Benoit

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  6. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
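    The banding idea can be sketched with an invented toy cross-section table: particles are partitioned by energy and processed band by band, so each pass touches only a contiguous slice of the table and gets maximal temporal reuse of the cached slice.

```python
import bisect
import random

random.seed(2)
# Toy flat cross-section table (energy grid + values). In production MC codes
# this table is far too large for cache; here it only illustrates the banding.
GRID = [i * 0.01 for i in range(1000)]       # energy grid, 0-10 MeV
XS = [1.0 / (e + 0.1) for e in GRID]         # invented cross-section values

def lookup(e):
    """Nearest-lower-grid-point cross-section lookup."""
    k = bisect.bisect_right(GRID, e) - 1
    return XS[max(0, min(k, len(GRID) - 1))]

particles = [random.uniform(0.0, 9.99) for _ in range(10000)]

# Banded sweep: partition the particles into energy bands and process one band
# at a time, so each pass touches only a contiguous slice of the table.
NBANDS = 10
WIDTH = 10.0 / NBANDS
bands = [[] for _ in range(NBANDS)]
for e in particles:
    bands[min(int(e / WIDTH), NBANDS - 1)].append(e)

total = 0.0
for band in bands:
    for e in band:
        total += lookup(e)
```

    The band count trades off cache fit against the number of passes over the particle population, which is how the algorithm can be tuned to different architectures.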

  7. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  8. Monte Carlo modeling of transport in PbSe nanocrystal films

    SciTech Connect

    Carbone, I. Carter, S. A.; Zimanyi, G. T.

    2013-11-21

    A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values to the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.

  9. Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport

    NASA Astrophysics Data System (ADS)

    Manalo, K.; Chin, M.; Sjoden, G.

    2014-06-01

    Hybrid methods of neutron transport have increased greatly in use, for example, in applications of using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuels (cylinder, sphere, etc.) is impacted in the representation of proper fuel inventory (both in deviation of mass and in exact geometry representation). For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.

  10. MCNPX Monte Carlo simulations of particle transport in SiC semiconductor detectors of fast neutrons

    NASA Astrophysics Data System (ADS)

    Sedlačková, K.; Zat'ko, B.; Šagátová, A.; Pavlovič, M.; Nečas, V.; Stacho, M.

    2014-05-01

    The aim of this paper was to investigate particle transport properties of a fast neutron detector based on silicon carbide. MCNPX (Monte Carlo N-Particle eXtended) code was used in our study because it allows seamless particle transport, thus not only interacting neutrons can be inspected but also secondary particles can be banked for subsequent transport. Modelling of the fast-neutron response of a SiC detector was carried out for fast neutrons produced by 239Pu-Be source with the mean energy of about 4.3 MeV. Using the MCNPX code, the following quantities have been calculated: secondary particle flux densities, reaction rates of elastic/inelastic scattering and other nuclear reactions, distribution of residual ions, deposited energy and energy distribution of pulses. The values of reaction rates calculated for different types of reactions and resulting energy deposition values showed that the incident neutrons transfer part of the carried energy predominantly via elastic scattering on silicon and carbon atoms. Other fast-neutron induced reactions include inelastic scattering and nuclear reactions followed by production of α-particles and protons. Silicon and carbon recoil atoms, α-particles and protons are charged particles which contribute to the detector response. It was demonstrated that although the bare SiC material can register fast neutrons directly, its detection efficiency can be enlarged if it is covered by an appropriate conversion layer. Comparison of the simulation results with experimental data was successfully accomplished.

  11. Monte Carlo simulation of ballistic transport in high-mobility channels

    NASA Astrophysics Data System (ADS)

    Sabatini, G.; Marinchio, H.; Palermo, C.; Varani, L.; Daoud, T.; Teissier, R.; Rodilla, H.; González, T.; Mateos, J.

    2009-11-01

    By means of Monte Carlo simulations coupled with a two-dimensional Poisson solver, we directly evaluate the possibility of using high-mobility materials in ultrafast devices exploiting ballistic transport. To this purpose, we have calculated specific physical quantities such as the transit time, the transit velocity, the free-flight time and the mean free path as functions of applied voltage in InAs channels with different lengths, from 2000 nm down to 50 nm. In this way the transition from diffusive to ballistic transport is carefully described. We note a high value of the mean transit velocity, with a maximum of 14 × 10^5 m/s for a 50 nm-long channel and a transit time shorter than 0.1 ps, corresponding to a cutoff frequency in the terahertz domain. The percentage of ballistic electrons and the number of scattering events as functions of distance are also reported, showing the strong influence of quasi-ballistic transport in the shorter channels.
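    The ballistic fraction reported above can be illustrated with a toy free-flight model (parameters invented; this is not the paper's coupled Monte Carlo/Poisson simulator): with exponentially distributed free paths of mean λ, the expected fraction of electrons crossing a channel of length L without scattering is exp(-L/λ).

```python
import math
import random

random.seed(3)
# Toy forward-flight model: an electron crosses a channel of length L with
# exponentially distributed free paths of mean LAMBDA, and is counted as
# "ballistic" if it never scatters. Parameters are illustrative.
L = 50e-9          # channel length (m)
LAMBDA = 60e-9     # mean free path (m)

def count_scatterings():
    """Number of scattering events before the electron exits the channel."""
    x, n = 0.0, 0
    while True:
        x += -LAMBDA * math.log(1.0 - random.random())  # exponential free path
        if x >= L:
            return n
        n += 1

runs = [count_scatterings() for _ in range(5000)]
ballistic_frac = sum(1 for n in runs if n == 0) / len(runs)
mean_scatterings = sum(runs) / len(runs)
# Poisson statistics predict a ballistic fraction of exp(-L/LAMBDA) (about
# 0.43 here) and a mean of L/LAMBDA scatterings per transit.
```

    Shrinking L relative to λ pushes the ballistic fraction toward one, which is the regime the shorter channels in the study probe.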

  12. Gel dosimetry measurements and Monte Carlo modeling for external radiotherapy photon beams: Comparison with a treatment planning system dose distribution

    NASA Astrophysics Data System (ADS)

    Valente, M.; Aon, E.; Brunetto, M.; Castellano, G.; Gallivanone, F.; Gambarini, G.

    2007-09-01

    Gel dosimetry has proved to be useful to determine absorbed dose distributions in radiotherapy, as well as to validate treatment plans. Gel dosimetry allows dose imaging and is particularly helpful for non-uniform dose distribution measurements, as may occur when multiple-field irradiation techniques are employed. In this work, we report gel-dosimetry measurements and Monte Carlo (PENELOPE®) calculations for the dose distribution inside a tissue-equivalent phantom exposed to a typical multiple-field irradiation. Irradiations were performed with a 10 MV photon beam from a Varian® Clinac 18 accelerator. The employed dosimeters consisted of layers of Fricke Xylenol Orange radiochromic gel. The method for absorbed dose imaging was based on analysis of visible light transmittance, usually detected by means of a CCD camera. With the aim of finding a simple method for light transmittance image acquisition, a commercial flatbed-like scanner was employed. The experimental and simulated dose distributions have been compared with those calculated with a commercially available treatment planning system, showing a reasonable agreement.

  13. Mesh-based Monte Carlo method for fibre-optic optogenetic neural stimulation with direct photon flux recording strategy.

    PubMed

    Shin, Younghoon; Kwon, Hyuk-Sang

    2016-03-21

    We propose a Monte Carlo (MC) method based on a direct photon flux recording strategy using an inhomogeneous, meshed rodent brain atlas. This MC method was inspired by and dedicated to fibre-optics-based optogenetic neural stimulation, thus providing an accurate and direct solution for light intensity distributions in brain regions with different optical properties. Our model was used to estimate the 3D light intensity attenuation for the close proximity between an implanted optical fibre source and the neural target area for typical optogenetics applications. Interestingly, there are discrepancies with studies using a diffusion-based light intensity prediction model, perhaps due to the use of improper light scattering models developed for far-field problems. Our solution was validated by comparison with the gold-standard MC model, and it enabled accurate calculations of internal intensity distributions in an inhomogeneous near-light-source domain. Thus our strategy can be applied to studying how illuminated light spreads through an inhomogeneous brain area, or for determining the amount of light required for optogenetic manipulation of a specific neural target area. PMID:26914289

  14. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa

    2016-01-01

    Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data, the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays, searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy was observed between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its actual production by the manufacturer, as they may differ substantially. The caveat is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging.

  15. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry.

    PubMed

    Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa

    2016-01-01

    Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data, the MC results showed substantial disagreement for the smallest field size simulated (5 mm × 5 mm). To explain the difference between the two datasets, a detector was imaged with x rays, searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy was observed between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and its actual production by the manufacturer, as they may differ substantially. The caveat is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging. PMID:26630437

  16. Understanding the lateral dose response functions of high-resolution photon detectors by reverse Monte Carlo and deconvolution analysis.

    PubMed

    Looe, Hui Khee; Harder, Dietrich; Poppe, Björn

    2015-08-21

    The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor. PMID:26267311
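    The convolution relation described above, in which the measured signal profile is the true dose profile convolved with the lateral dose response function, can be sketched in 1D. The Gaussian kernel and the ideal field-edge profile below are illustrative, not fitted detector data.

```python
import math

def gaussian_kernel(sigma, half_width):
    """Discrete Gaussian convolution kernel, normalized to unit area."""
    xs = range(-half_width, half_width + 1)
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]
    s = sum(k)
    return [v / s for v in k]

def convolve(profile, kernel):
    """Convolve with edge clamping, preserving the profile length."""
    h = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - h, 0), len(profile) - 1)
            acc += w * profile[idx]
        out.append(acc)
    return out

# Ideal field edge: unit dose for x < 50, zero beyond. The simulated detector
# "signal" is the dose profile blurred by the lateral dose response function.
dose = [1.0 if i < 50 else 0.0 for i in range(100)]
signal = convolve(dose, gaussian_kernel(sigma=3.0, half_width=10))
```

    A wider or narrower kernel broadens or sharpens the measured penumbra accordingly, which is the volume effect the paper's correction factor is built to undo.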

  17. Bone and mucosal dosimetry in skin radiation therapy: a Monte Carlo study using kilovoltage photon and megavoltage electron beams.

    PubMed

    Chow, James C L; Jiang, Runqing

    2012-06-21

    This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle(3) treatment planning system) using the MeV electron beams, were performed for dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter, were generated using the BEAMnrc code and verified against measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (d(max)) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering thin (0.5 cm) uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, the 105 kVp beam was found to produce more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. 
Dose in the second bone layer, interfacing with air, was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% for the bone and mucosal layers were found between our Monte Carlo results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and the thicknesses of the bone and uniform water, using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight for radiation staff to justify and estimate bone and mucosal dose. PMID:22642985

  18. Bone and mucosal dosimetry in skin radiation therapy: a Monte Carlo study using kilovoltage photon and megavoltage electron beams

    NASA Astrophysics Data System (ADS)

    Chow, James C. L.; Jiang, Runqing

    2012-06-01

    This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams, were performed for dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter, were generated using the BEAMnrc code and verified against measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering thin (0.5 cm) uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, the 105 kVp beam was found to produce more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. 
Dose in the second bone layer, interfacing with air, was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% for the bone and mucosal layers were found between our Monte Carlo results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and the thicknesses of the bone and uniform water, using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight for radiation staff to justify and estimate bone and mucosal dose.

  19. Observing gas and dust in simulations of star formation with Monte Carlo radiation transport on Voronoi meshes

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Ercolano, B.; Dale, J.

    2016-02-01

    Ionizing feedback from massive stars dramatically affects the interstellar medium local to star-forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies however on photoionization/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionization and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields on to Cartesian grids before performing radiative transfer/photoionization calculations, results in significant errors in the temperature and ionization structure of the region. Furthermore, we describe the mathematical process of tracing photon energy packets through a Voronoi tessellation, including optimizations, treating problematic cases and boundary conditions. We perform various benchmarks using both the original version of MOCASSIN and the modified version using the Voronoi tessellation. We show that for uniform grids, or equivalently a cubic lattice of cell generating points, the new Voronoi version gives the same results as the original Cartesian grid version of MOCASSIN for all benchmarks. For non-uniform initial conditions, such as using snapshots from smoothed particle hydrodynamics simulations, we show that the Voronoi version performs better than the Cartesian grid version, resulting in much better resolution in dense regions.
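    Locating a photon energy packet in a Voronoi tessellation reduces to a nearest-generator test, since a Voronoi cell is by definition the set of points closest to its generating point. The Python sketch below (random generators and a brute-force search; real codes such as the modified MOCASSIN walk between neighbouring cells instead) records the sequence of cells a straight ray visits.

```python
import random

random.seed(4)
# 64 random cell-generating points in the unit cube (invented geometry).
generators = [(random.random(), random.random(), random.random())
              for _ in range(64)]

def cell_of(p):
    """Voronoi cell containing p: index of the nearest generating point."""
    return min(range(len(generators)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(p, generators[i])))

def trace(origin, direction, step=0.02, n_steps=40):
    """March a photon packet along a straight ray, recording the sequence of
    Voronoi cells it visits (brute-force point location at every step)."""
    cells, p = [], origin
    for _ in range(n_steps):
        c = cell_of(p)
        if not cells or cells[-1] != c:
            cells.append(c)
        p = tuple(a + step * d for a, d in zip(p, direction))
    return cells

path = trace((0.1, 0.5, 0.5), (1.0, 0.0, 0.0))
```

    Production tracers compute exact cell-boundary crossings from the perpendicular bisector planes between neighbouring generators rather than fixed-step marching, which is part of the optimization work the paper describes.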

  20. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in meta-waveguides (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu

    2015-09-01

    Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until the recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: the impossibility of reflection-free light propagation along a sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning new revolutionary concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) a gyromagnetic inclusion breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of a full topological bandgap in a bianisotropic structure, and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.

  1. Fractional transport and photonic sub-diffusion in aperiodic dielectric metamaterials

    NASA Astrophysics Data System (ADS)

    Dal Negro, Luca; Wang, Yu; Inampudi, Sandeep

    Using rigorous transfer matrix theory and full-vector Finite Difference Time Domain (FDTD) simulations in combination with Wavelet Transform Modulus Maxima analysis of multifractal spectra, we demonstrate all-dielectric aperiodic metamaterial structures that exhibit sub-diffusive photon transport properties that are widely tunable across the near-infrared spectral range. The proposed approach leverages the unprecedented spectral scalability offered by aperiodic photonic systems and demonstrates, for the first time, the possibility of achieving logarithmic Sinai sub-diffusion of photons. In particular, we show that the control of multifractal energy spectra and critical modes in aperiodic metamaterials with nanoscale dielectric components enables tuning of anomalous optical transport from sub- to super-diffusive dynamics, in close analogy with electron dynamics in quasi-periodic potentials. Fractional diffusion equation models are introduced for the efficient modeling of photon sub-diffusive processes in metamaterials, and applications to diffraction-free propagation in aperiodic media are provided. The ability to tailor photon transport phenomena in metamaterials with properties originating from aperiodic geometrical correlations can lead to novel functionalities and active devices that rely on anomalous photon sub-diffusion to control beam collimation and non-resonantly enhance light-matter interaction across multiple spectral bands.
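
The transfer-matrix calculation invoked above can be illustrated with a minimal normal-incidence characteristic-matrix sketch. This is a generic textbook implementation, not the authors' code; the function name and defaults are assumptions.

```python
import math
import cmath

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance of a lossless dielectric stack via the
    characteristic-matrix (transfer-matrix) method.
    layers : list of (refractive index, thickness) pairs, front to back
    lam    : vacuum wavelength, in the same units as the thicknesses."""
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0          # start from identity
    for n, d in layers:
        delta = 2 * math.pi * n * d / lam            # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        # right-multiply the running product by this layer's matrix
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    b = m00 + m01 * n_out
    c2 = m10 + m11 * n_out
    t = 2 * n_in / (n_in * b + c2)                   # amplitude transmission
    return (n_out / n_in) * abs(t) ** 2
```

For an aperiodic stack (e.g. a Fibonacci sequence of two indices), `layers` is simply built from the sequence; sweeping `lam` then yields the multifractal transmission spectra analyzed in the work above.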

  2. Technical Note: Study of the electron transport parameters used in PENELOPE for the Monte Carlo simulation of Linac targets

    SciTech Connect

    Rodriguez, Miguel; Sempau, Josep; Brualla, Lorenzo

    2015-06-15

    Purpose: The Monte Carlo simulation of electron transport in Linac targets using the condensed history technique is known to be problematic owing to a potential dependence of absorbed dose distributions on the electron step length. In the PENELOPE code, the step length is partially determined by the transport parameters C1 and C2. The authors have investigated the effect on the absorbed dose distribution of the values given to these parameters in the target. Methods: A monoenergetic 6.26 MeV electron pencil beam from a point source was simulated impinging normally on a cylindrical tungsten target. Electrons leaving the tungsten were discarded. Radial absorbed dose profiles were obtained at 1.5 cm of depth in a water phantom located at 100 cm for values of C1 and C2 in the target both equal to 0.1, 0.01, or 0.001. A detailed simulation case was also considered and taken as the reference. Additionally, lateral dose profiles were estimated and compared with experimental measurements for a 6 MV photon beam of a Varian Clinac 2100 for the cases of C1 and C2 both set to 0.1 or 0.001 in the target. Results: On the central axis, the dose obtained for the case C1 = C2 = 0.1 shows a deviation of (17.2% ± 1.2%) with respect to the detailed simulation. This difference decreases to (3.7% ± 1.2%) for the case C1 = C2 = 0.01. The case C1 = C2 = 0.001 produces a radial dose profile that is equivalent to that of the detailed simulation within the statistical uncertainty reached (1%). The effect is also appreciable in the crossline dose profiles estimated for the realistic geometry of the Linac. In another simulation, it was shown that the error made by choosing inappropriate transport parameters can be masked by tuning the energy and focal spot size of the initial beam. Conclusions: The use of large path lengths for the condensed simulation of electrons in a Linac target with PENELOPE leads to deviations of the dose in the patient or phantom. 
Based on the results obtained in this work, values of C1 and C2 larger than 0.001 should not be used in Linac targets without further investigation.

  3. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved, on average, from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved.
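
The weighted-sum structure of the PSL beam model makes commissioning a linear inverse problem. The sketch below illustrates that structure with a plain Tikhonov-regularized least-squares fit standing in for the paper's augmented-Lagrangian solver with symmetry and smoothness regularization; all names are hypothetical.

```python
import numpy as np

def commission_weights(psl_doses, measured, reg=1e-6):
    """Fit PSL weights so the weighted sum of pre-computed per-PSL doses
    matches a measured dose.
    psl_doses : (n_psl, n_voxel) array, dose per unit weight of each PSL
    measured  : (n_voxel,) measured dose
    Solves the normal equations of ||w @ D - measured||^2 + reg * ||w||^2."""
    D = np.asarray(psl_doses, dtype=float)
    A = D @ D.T + reg * np.eye(D.shape[0])   # regularized Gram matrix
    b = D @ np.asarray(measured, dtype=float)
    return np.linalg.solve(A, b)

def beam_dose(weights, psl_doses):
    """Dose of the whole beam: weighted sum of the per-PSL doses."""
    return np.asarray(weights) @ np.asarray(psl_doses)
```

Because each PSL dose is pre-computed once, re-evaluating `beam_dose` during the weight optimization is cheap, which is what makes this commissioning loop practical.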

  4. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy.

    PubMed

    Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B

    2014-11-01

    Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved, on average, from 70.56% to 99.36% for the 2%/2 mm criterion and from 32.22% to 89.65% for the 1%/1 mm criterion. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved. PMID:25295381

  5. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  6. Monte Carlo based method for conversion of in-situ gamma ray spectra obtained with a portable Ge detector to an incident photon flux energy distribution.

    PubMed

    Clouvas, A; Xanthos, S; Antonopoulos-Domis, M; Silva, J

    1998-02-01

    A Monte Carlo based method for the conversion of an in-situ gamma-ray spectrum obtained with a portable Ge detector to a photon flux energy distribution is proposed. The spectrum is first stripped of the partial absorption and cosmic-ray events, leaving only the events corresponding to the full absorption of a gamma ray. Applying to the resulting spectrum the full absorption efficiency curve of the detector, determined by calibrated point sources and Monte Carlo simulations, the photon flux energy distribution is deduced. The events corresponding to partial absorption in the detector are determined by Monte Carlo simulations for different incident photon energies and angles using CERN's GEANT library. Using the detector characteristics given by the manufacturer as input, it was impossible to reproduce the experimental spectra obtained with point sources. A transition zone of increasing charge collection efficiency had to be introduced in the simulation geometry, after the inactive Ge layer, in order to obtain good agreement between the simulated and experimental spectra. The functional form of the charge collection efficiency is deduced from a diffusion model. PMID:9450590
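
The stripping-and-conversion procedure can be sketched as follows, assuming a simulated partial-absorption response per channel is available from the Monte Carlo step. The data layout and names are illustrative, not the authors' code.

```python
def strip_and_convert(spectrum, response, efficiency, live_time):
    """Strip partial-absorption events from a Ge spectrum and convert the
    remaining full-absorption counts to an incident photon flux per channel.
    spectrum   : measured net counts per channel (cosmic events removed)
    response   : response[k][ch] = simulated partial-absorption counts in
                 channel ch per full-absorption count in channel k
    efficiency : full-absorption efficiency per channel (detector area
                 folded in)
    live_time  : acquisition live time."""
    net = [float(c) for c in spectrum]
    n = len(net)
    for k in range(n - 1, -1, -1):          # high energies first: they feed
        full = max(net[k], 0.0)             # continuum into lower channels
        for ch in range(k):
            net[ch] -= full * response[k][ch]
    return [net[k] / (efficiency[k] * live_time) for k in range(n)]
```

Working from the highest channel downward matters: once a high-energy gamma's continuum is removed, the lower channels contain only their own full-absorption events plus continua still to be stripped.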

  7. Pre-conditioned backward Monte Carlo solutions to radiative transport in planetary atmospheres. Fundamentals: Sampling of propagation directions in polarising media

    NASA Astrophysics Data System (ADS)

    García Muñoz, A.; Mills, F. P.

    2015-01-01

    Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. 
By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72

  8. Enhancing coherent transport in a photonic network using controllable decoherence

    NASA Astrophysics Data System (ADS)

    Biggerstaff, Devon N.; Heilmann, René; Zecevik, Aidan A.; Gräfe, Markus; Broome, Matthew A.; Fedrizzi, Alessandro; Nolte, Stefan; Szameit, Alexander; White, Andrew G.; Kassal, Ivan

    2016-04-01

    Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of coherent transport can be enhanced through dynamic interaction between the system and a noisy environment. We report an experimental simulation of environment-assisted coherent transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable-strength decoherence is simulated by broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence.

  9. Enhancing coherent transport in a photonic network using controllable decoherence.

    PubMed

    Biggerstaff, Devon N; Heilmann, René; Zecevik, Aidan A; Gräfe, Markus; Broome, Matthew A; Fedrizzi, Alessandro; Nolte, Stefan; Szameit, Alexander; White, Andrew G; Kassal, Ivan

    2016-01-01

    Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of coherent transport can be enhanced through dynamic interaction between the system and a noisy environment. We report an experimental simulation of environment-assisted coherent transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable-strength decoherence is simulated by broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence. PMID:27080915

  10. Enhancing coherent transport in a photonic network using controllable decoherence

    PubMed Central

    Biggerstaff, Devon N.; Heilmann, René; Zecevik, Aidan A.; Gräfe, Markus; Broome, Matthew A.; Fedrizzi, Alessandro; Nolte, Stefan; Szameit, Alexander; White, Andrew G.; Kassal, Ivan

    2016-01-01

    Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of coherent transport can be enhanced through dynamic interaction between the system and a noisy environment. We report an experimental simulation of environment-assisted coherent transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable-strength decoherence is simulated by broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence. PMID:27080915

  11. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams.

    PubMed

    Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

    2009-12-21

    The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (gamma) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 x 10 cm(2) field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm(2)) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. 
Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions. PMID:19934489
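
The gamma procedure used throughout this evaluation combines a dose-difference and a distance-to-agreement criterion into a single index. A minimal 1D sketch with global normalization to the reference maximum (names and defaults are assumptions):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D gamma index with global normalization.
    dd  : dose-difference criterion as a fraction of the reference maximum
    dta : distance-to-agreement criterion, in the units of the positions
    Returns one gamma value per reference point; gamma <= 1 passes."""
    d_norm = dd * max(ref_dose)
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        # minimize the combined metric over all evaluated points
        g2 = min(((de - dr) / d_norm) ** 2 + ((xe - xr) / dta) ** 2
                 for xe, de in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas
```

The 2D evaluation reported above is the same construction with a two-component distance term; pass rates are then the fraction of points with gamma ≤ 1.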

  12. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams

    NASA Astrophysics Data System (ADS)

    Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

    2009-12-01

    The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm2 field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm2) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. 
Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% ± 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D γ-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for γ>1 was 4.7% ± 3.7%, the average γ1% value was 1.19 ± 0.16 and the mean 2D γ-value was 0.44 ± 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions.

  13. Adjoint Monte Carlo method for prostate external photon beam treatment planning: an application to 3D patient anatomy

    NASA Astrophysics Data System (ADS)

    Wang, Brian; Goldstein, Moshe; Xu, X. George; Sahoo, Narayan

    2005-03-01

    Recently, the theoretical framework of the adjoint Monte Carlo (AMC) method has been developed using a simplified patient geometry. In this study, we extended our previous work by applying the AMC framework to a 3D anatomical model called VIP-Man constructed from the Visible Human images. First, the adjoint fluxes for the prostate (PTV) and rectum and bladder (organs at risk (OARs)) were calculated on a spherical surface of 1 m radius, centred at the centre of gravity of PTV. An importance ratio, defined as the PTV dose divided by the weighted OAR doses, was calculated for each of the available beamlets to select the beam angles. Finally, the detailed doses in PTV and OAR were calculated using a forward Monte Carlo simulation to include the electron transport. The dose information was then used to generate dose volume histograms (DVHs). The Pinnacle treatment planning system was also used to generate DVHs for the 3D plans with beam angles obtained from the AMC (3D-AMC) and a standard six-field conformal radiation therapy plan (3D-CRT). Results show that the DVHs for prostate from 3D-AMC and the standard 3D-CRT are very similar, showing that both methods can deliver prescribed dose to the PTV. A substantial improvement in the DVHs for bladder and rectum was found for the 3D-AMC method in comparison to those obtained from 3D-CRT. However, the 3D-AMC plan is less conformal than the 3D-CRT plan because only bladder, rectum and PTV are considered for calculating the importance ratios. Nevertheless, this study clearly demonstrated the feasibility of the AMC in selecting the beam directions as a part of a treatment planning based on the anatomical information in a 3D and realistic patient anatomy.
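
The beam-angle selection step described above reduces to ranking candidate beamlets by their importance ratio. A hypothetical sketch, assuming per-beamlet adjoint dose estimates for the PTV and each OAR are already available (data layout and names are illustrative):

```python
def select_beam_angles(beamlets, oar_weights, n_beams=6):
    """Rank beamlets by importance ratio and return the best beam angles.
    beamlets    : list of dicts like {'angle': 40.0, 'ptv': d_ptv,
                  'oar': {'rectum': d_r, 'bladder': d_b}} holding
                  adjoint-estimated doses per unit beamlet weight
    oar_weights : relative importance of each organ at risk
    Importance ratio = PTV dose / weighted sum of OAR doses."""
    def ratio(b):
        denom = sum(oar_weights[k] * d for k, d in b['oar'].items())
        return b['ptv'] / denom if denom > 0 else float('inf')
    ranked = sorted(beamlets, key=ratio, reverse=True)
    return [b['angle'] for b in ranked[:n_beams]]
```

Because only the listed OARs enter the denominator, a plan built this way can spare them well while being less conformal elsewhere, which matches the trade-off reported above.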

  14. Monte Carlo calculation of electron transport coefficients in counting gas mixtures I. Argon-methane mixtures

    NASA Astrophysics Data System (ADS)

    Fraser, G. W.; Mathieson, E.

    1986-07-01

    We describe a Monte Carlo simulation of electron transport in gas mixtures, under the influence of a uniform, nonionising electric field, E. Our calculations, in contrast with earlier studies of counting gases based on the Boltzmann transport equation, examine the anisotropic nature of electron diffusion. Independent estimators of the transverse and longitudinal diffusion coefficients D and DL are derived. For both pure argon and pure methane, predictions of the model are shown to be in good agreement with measurements of the electron mobility, μ, and of the ratios D/μ and DL/μ at field-to-pressure quotients 0.03 ≤ E/p ≤ 2.0 V cm-1 Torr-1. Alternative methane cross sections are compared in detail. Finally, our calculations are extended to the mixtures A-10% CH4, A-20% CH4 and A-50% CH4. This is the first part of an extensive study of electron drift and diffusion in common counting gas mixtures.
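
Estimators of this kind are typically built from the moments of the electron swarm's displacement. A minimal sketch, assuming the field lies along z and ensemble statistics are taken at a single drift time (the paper's independent estimators are more elaborate):

```python
import statistics

def transport_coefficients(final_positions, t):
    """Estimate swarm transport coefficients from electron positions after a
    drift time t, with the electric field along z.
    w  = <z> / t                       (drift velocity)
    DL = var(z) / (2 t)                (longitudinal diffusion)
    DT = (var(x) + var(y)) / (4 t)     (transverse diffusion)"""
    xs, ys, zs = zip(*final_positions)
    w = statistics.fmean(zs) / t
    d_l = statistics.variance(zs) / (2 * t)
    d_t = (statistics.variance(xs) + statistics.variance(ys)) / (4 * t)
    return w, d_l, d_t
```

Tracking the two transverse axes separately from the field axis is exactly what exposes the anisotropy that Boltzmann-equation treatments with a single diffusion coefficient miss.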

  15. Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons

    SciTech Connect

    Mei, S.; Knezevic, I.; Maurer, L. N.; Aksamija, Z.

    2014-10-28

    We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.

  16. Oxygen transport properties estimation by classical trajectory-direct simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro

    2015-05-01

    Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve the predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort noted in early applications of the method can be compensated for by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties with different equilibrium and non-equilibrium CT-DSMC based simulations, which yield closely agreeing values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300-900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.

  17. Oxygen transport properties estimation by classical trajectory–direct simulation Monte Carlo

    SciTech Connect

    Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro

    2015-05-15

    Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve the predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort noted in early applications of the method can be compensated for by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing the computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES) obtained from molecular beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties with different equilibrium and non-equilibrium CT-DSMC based simulations, which yield closely agreeing values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300–900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.

  18. The lower timing resolution bound for scintillators with non-negligible optical photon transport time in time-of-flight PET

    PubMed Central

    Vinke, Ruud; Olcott, Peter D.; Cates, Joshua W.; Levin, Craig S.

    2014-01-01

    In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. With this method, the timing resolution bound can be calculated directly from Monte Carlo-generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed-form solutions of arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 × 3 × 20 mm3 LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single-photoelectron pulse rise time. PMID:25255807
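
    The idea of estimating timing behavior directly from simulated photon arrival times can be illustrated with a toy calculation. The sketch below is not the paper's lower-bound estimator; it only measures the spread of the earliest photon arrival over many simulated events, with invented decay and transport constants (not fitted LYSO parameters).

```python
import random
import statistics

def arrival_times(n_photons, rng, decay_ns=40.0, transport_sigma_ns=0.1):
    # Scintillation emission modeled as an exponential decay convolved with
    # a Gaussian spread standing in for the optical photon transport time.
    return [rng.expovariate(1.0 / decay_ns) + rng.gauss(0.0, transport_sigma_ns)
            for _ in range(n_photons)]

def first_photon_spread(n_events, n_photons, seed=1):
    """Std. dev. of the earliest photon arrival over many events: a crude
    stand-in for the timing resolution a first-photon trigger could reach."""
    rng = random.Random(seed)
    firsts = [min(arrival_times(n_photons, rng)) for _ in range(n_events)]
    return statistics.stdev(firsts)
```

    A proper bound calculation would exploit the full arrival-time distribution rather than only the first photon, but even this crude statistic shows the timing spread shrinking as the photon count grows.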

  19. Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water

    NASA Astrophysics Data System (ADS)

    Papamichael, Georgios Ioannis

    A Monte Carlo simulation is presented for radiation transport in water. This process is of utmost importance, having applications in oncology and cancer therapy, in protecting people and the environment, in waste management, in radiation chemistry and in some solid-state detectors. It is also a phenomenon of interest in microelectronics on satellites in orbit that are subject to solar radiation, and in spacecraft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: elastic scattering, impact excitation and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical process of interactions, thus determining stochastically the energy and direction of outgoing particles following an event. Monte Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interactions. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
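
    The event-by-event scheme described above (sample an interaction type in proportion to its cross section, deduct the energy transfer, and spawn a secondary on ionization) can be sketched as follows. The cross sections and energy losses here are illustrative placeholders, not the code's actual water data.

```python
import random

# Illustrative, energy-independent cross sections in arbitrary units; the
# real code uses energy-dependent theoretical/semi-empirical values.
def sample_event(rng, sigma_elastic=1.0, sigma_excite=0.5, sigma_ionize=0.8):
    total = sigma_elastic + sigma_excite + sigma_ionize
    u = rng.random() * total
    if u < sigma_elastic:
        return "elastic", 0.0                # negligible energy loss (approx.)
    if u < sigma_elastic + sigma_excite:
        return "excitation", 8.0             # nominal excitation loss (eV)
    return "ionization", 15.0                # nominal ionization loss (eV)

def track(energy_ev, cutoff_ev=20.0, seed=0):
    """Follow a primary until its energy drops below the cutoff,
    counting secondary electrons born in ionization events."""
    rng = random.Random(seed)
    secondaries = 0
    while energy_ev > cutoff_ev:
        kind, loss = sample_event(rng)
        energy_ev -= loss
        if kind == "ionization":
            secondaries += 1                 # a secondary electron is created
    return energy_ev, secondaries
```

    In the full simulation each secondary would itself be pushed onto a stack and tracked with the same loop until all generations fall below the cutoff.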

  20. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N) whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only the particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. 
The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs)
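
    The tally-batching idea above (group the particle histories on one processor into batches so that only per-batch statistics, not per-history scores, are ever combined at the end of the run) can be sketched as follows. The per-history "score" is a stand-in random variable, not an actual transport tally.

```python
import random
import statistics

def batched_tally(n_batches=20, histories_per_batch=500, seed=7):
    """Accumulate a tally as per-batch means; only these few numbers
    would ever need to be communicated, once, at the end of the run."""
    rng = random.Random(seed)
    batch_means = []
    for _ in range(n_batches):
        # Stand-in per-history score (a real tally would come from
        # tracked particle histories, e.g. track-length estimators).
        scores = [rng.expovariate(1.0) for _ in range(histories_per_batch)]
        batch_means.append(sum(scores) / histories_per_batch)
    mean = statistics.mean(batch_means)
    sem = statistics.stdev(batch_means) / n_batches ** 0.5
    return mean, sem
```

    Treating batch means as independent samples also gives the usual standard-error estimate without any per-history bookkeeping crossing the network.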

  1. MCNP: Photon benchmark problems

    SciTech Connect

    Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

    1991-09-01

    The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.

  2. Monte Carlo calculations of initial energies of electrons in water irradiated by photons with energies up to 1 GeV.

    PubMed

    Todo, A S; Hiromoto, G; Turner, J E; Hamm, R N; Wright, H A

    1982-12-01

    Previous calculations of the initial energies of electrons produced in water irradiated by photons are extended to 1 GeV by including pair and triplet production. Calculations were performed with the Monte Carlo computer code PHOEL-3, which replaces the earlier code, PHOEL-2. Tables of initial electron energies are presented for single interactions of monoenergetic photons at a number of energies from 10 keV to 1 GeV. These tables can be used to compute kerma in water irradiated by photons with arbitrary energy spectra to 1 GeV. In addition, separate tables of Compton- and pair-electron spectra are given over this energy range. The code PHOEL-3 is available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830. PMID:7152948

  3. A Monte Carlo evaluation of dose enhancement by cisplatin and titanocene dichloride chemotherapy drugs in brachytherapy with photon emitting sources.

    PubMed

    Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney

    2014-06-01

    Some chemotherapy drugs contain a high Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) by cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon emitting brachytherapy sources were simulated and their dose rate constant and radial dose function were determined and compared with published data. Dose enhancement factor was obtained for 1, 3 and 5 % concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour, in soft tissue phantom. The results of the dose rate constant and radial dose function showed good agreement with published data. Our results have shown that depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEF for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5 % concentrations of the agents, and (125)I source. Dose enhancement factor is considerably higher for both chemotherapy agents with (125)I, (103)Pd and (169)Yb sources, compared to (192)Ir, (198)Au and (60)Co sources. At similar concentrations, dose enhancement for cisplatin is higher compared with titanocene dichloride. Based on the results of this study, combination of brachytherapy and chemotherapy with agents containing a high Z element resulted in higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high atomic number drugs can have the potential benefits of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342
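
    The dose enhancement factor itself is a simple ratio of doses with and without the high-Z agent. A minimal sketch of the in-tumour averaged form (one common convention; the paper's exact averaging may differ) is:

```python
def dose_enhancement_factor(dose_with_agent, dose_without_agent):
    """In-tumour averaged DEF: ratio of the mean voxel dose scored with
    the high-Z agent present to the mean dose without it."""
    if len(dose_with_agent) != len(dose_without_agent) or not dose_with_agent:
        raise ValueError("need matching, non-empty voxel dose arrays")
    mean_with = sum(dose_with_agent) / len(dose_with_agent)
    mean_without = sum(dose_without_agent) / len(dose_without_agent)
    return mean_with / mean_without
```

    In a study like this one, both voxel dose arrays would come from paired Monte Carlo runs of the same source and phantom, differing only in the tumour material composition.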

  4. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose shows very strong dependence on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verification among the codes and against carefully measured values, for a precise estimation of the chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both optimal simple spherical and detailed IC models. The measurements were performed in well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) a BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes in the photon energy region below 0.1 MeV and presented similar response above 0.2 MeV (agreeing within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams. For the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% for the X-ray beams below 120 kVp. In this study, we were especially interested in BNCT doses, where the low-energy photon contribution cannot be ignored; the MCNP model is recognized as the most suitable for simulating the wide photon-electron and neutron energy-distributed responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment by the detector's neutron and photon responses.

  5. Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernández, Rodrigo; Ott, Christian D.

    2015-11-01

    We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10^46 erg is deposited within 45° of the symmetry axis over 300 ms when a central BH is present. Similarly, 1.9 × 10^48 erg is deposited over 3 s when an HMNS sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.

  6. Anisotropy collision effect on ion transport in cold gas discharges with Monte Carlo simulation

    SciTech Connect

    Hennad, A.; Yousfi, M.

    1995-12-31

    Ion-molecule collision cross sections and transport and reaction coefficients are among the basic data needed for discharge modelling of non-thermal cold plasmas. In the literature, numerous methods are devoted to the experimental and theoretical determination of these basic data. However, data on ion-molecule collision cross sections are very sparse and in certain cases practically nonexistent for the low and intermediate ion energy range. The aim of this communication is therefore to give, in the case of two ions in their parent gases (N₂⁺/N₂ and O₂⁺/O₂), the set of collision cross sections involving momentum transfer, symmetric charge transfer and also inelastic (vibration and ionisation) cross sections. The differential collision cross section is also given in order to take into account the strong anisotropy effect of elastic collisions of ions, which are scattered mainly in the forward direction at the intermediate energy range. The differential cross sections are fully calculated from the polarization interaction potential at the low energy range, and from Lennard-Jones potentials for N₂⁺/N₂ and a modified form for O₂⁺/O₂ at higher energies; then, using a swarm unfolding technique, they are fitted until the best agreement is obtained between the transport and reaction coefficients measured in classical swarm experiments and those calculated from Monte Carlo simulation of ion transport over a large range of reduced electric field E/N.

  7. Experimental validation of a coupled neutron-photon inverse radiation transport solver

    NASA Astrophysics Data System (ADS)

    Mattingly, John; Mitchell, Dean J.; Harding, Lee T.

    2011-10-01

    Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
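
    The fitting strategy (adjust model parameters until the forward transport model reproduces the measured signatures) can be illustrated with a one-parameter toy problem. The forward model below is invented for illustration (a monotone exponential stand-in, not Sandia's deterministic transport model), which lets plain bisection stand in for the full nonlinear regression.

```python
import math

# Invented forward model: detected signal falls off smoothly with a single
# reflector-like parameter x; constants are arbitrary.
def forward(x):
    return 100.0 * math.exp(-0.2 * x) + 5.0

def fit(measured, lo=0.0, hi=20.0, iters=60):
    """Invert the monotone forward model by bisection: a bare-bones
    stand-in for nonlinear regression against measured signatures."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if forward(mid) > measured:   # signal too high: parameter too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The real solver fits many parameters at once against full spectra and multiplicity distributions, but the principle is the same: drive the forward-model residual to a minimum.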

  8. Suppression of population transport and control of exciton distributions by entangled photons

    NASA Astrophysics Data System (ADS)

    Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul

    2013-04-01

    Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subject to this uncertainty.

  9. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
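
    Tracking through quadric geometries reduces to finding the nearest surface crossing along a ray: substituting r(t) = p + t·d into the quadric f(r) = rᵀA r + b·r + c gives a quadratic in t. A minimal sketch of that distance calculation (plain Python; it omits PENGEOM refinements such as fuzzy surfaces and body bookkeeping):

```python
import math

def quadric_eval(A, b, c, p):
    # f(p) = p·A·p + b·p + c for a 3x3 symmetric matrix A
    Ap = [sum(A[i][j] * p[j] for j in range(3)) for i in range(3)]
    return (sum(p[i] * Ap[i] for i in range(3))
            + sum(b[i] * p[i] for i in range(3)) + c)

def ray_quadric_distance(A, b, c, p, d):
    """Smallest positive t with f(p + t*d) = 0, or None if the ray misses."""
    Ad = [sum(A[i][j] * d[j] for j in range(3)) for i in range(3)]
    alpha = sum(d[i] * Ad[i] for i in range(3))
    beta = 2.0 * sum(p[i] * Ad[i] for i in range(3)) + sum(b[i] * d[i] for i in range(3))
    gamma = quadric_eval(A, b, c, p)
    if abs(alpha) < 1e-12:            # degenerate case: linear in t (plane)
        return -gamma / beta if beta and -gamma / beta > 0 else None
    disc = beta * beta - 4.0 * alpha * gamma
    if disc < 0:
        return None                   # no real intersection
    roots = [(-beta - math.sqrt(disc)) / (2.0 * alpha),
             (-beta + math.sqrt(disc)) / (2.0 * alpha)]
    positive = [t for t in roots if t > 1e-9]
    return min(positive) if positive else None
```

    A tracking routine would evaluate this distance for every surface bounding the current body and step the particle to the nearest crossing or to the sampled interaction point, whichever comes first.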

  10. SU-E-T-142: Effect of the Bone Heterogeneity On the Unflattened and Flattened Photon Beam Dosimetry: A Monte Carlo Comparison

    SciTech Connect

    Chow, J; Owrangi, A

    2014-06-01

    Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a bone layer 2 cm thick at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10×10 cm²). Phase-space files of the photon beams based on the Varian TrueBeam linac were generated by the Geant4 and BEAMnrc codes, and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of the unflattened photon beams was about 2.1 times larger than that of the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantoms became larger compared to that with the beam angle equal to 0°. The dose ratio of the unflattened to flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region than the flattened beam. Conclusion: Although the unflattened photon beam had a different beam output and quality compared to the flattened beam, dose enhancements due to bone scatter were found to be similar. However, we discovered that the depth dose deviation due to the presence of bone was sensitive to beam obliquity.

  11. Proton transport in water and DNA components: A Geant4 Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Champion, C.; Incerti, S.; Tran, H. N.; Karamitros, M.; Shin, J. I.; Lee, S. B.; Lekadir, H.; Bernal, M.; Francis, Z.; Ivanchenko, V.; Fojón, O. A.; Hanssen, J.; Rivarola, R. D.

    2013-07-01

    Accurate modeling of DNA damages resulting from ionizing radiation remains a challenge of today's radiobiology research. An original set of physics processes has been recently developed for modeling the detailed transport of protons and neutral hydrogen atoms in liquid water and in DNA nucleobases using the Geant4-DNA extension of the open source Geant4 Monte Carlo simulation toolkit. The theoretical cross sections as well as the mean energy transfers during the different ionizing processes were taken from recent works based on classical as well as quantum mechanical predictions. Furthermore, in order to compare energy deposition patterns in liquid water and DNA material, we here propose a simplified cellular nucleus model made of spherical voxels, each containing randomly oriented nanometer-size cylindrical targets filled with either liquid water or DNA material (DNA nucleobases) both with a density of 1 g/cm3. These cylindrical volumes have dimensions comparable to genetic material units of mammalian cells, namely, 25 nm (diameter) × 25 nm (height) for chromatin fiber segments, 10 nm (d) × 5 nm (h) for nucleosomes and 2 nm (d) × 2 nm (h) for DNA segments. Frequencies of energy deposition in the cylindrical targets are presented and discussed.

  12. A Monte-Carlo Model of Neutral-Particle Transport in Diverted Plasmas

    NASA Astrophysics Data System (ADS)

    Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

    1982-05-01

    The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall-reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudocollision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum-transfer rates, energy-transfer rates, and wall-sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
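
    Of the variance-reduction games mentioned above, Russian roulette is the simplest to sketch: a low-weight particle is either killed or survives with a boosted weight, chosen so that the expected weight is unchanged. The threshold and survival weight below are arbitrary illustrative values, not those of the neutral-transport code.

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Play roulette on a low-weight particle. Survivors are assigned
    survival_weight with probability weight/survival_weight, so the
    expected post-game weight equals the input weight (unbiased)."""
    if weight >= threshold:
        return weight                 # heavy enough: no game played
    if rng.random() < weight / survival_weight:
        return survival_weight        # survives with boosted weight
    return 0.0                        # terminated
```

    Splitting is the mirror-image game: an important, high-weight particle is divided into several copies of reduced weight, again preserving the expectation while shifting computational effort to the regions that matter.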

  13. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries.

    PubMed

    Majaron, Boris; Milanič, Matija; Premru, Jan

    2015-01-01

    In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries. PMID:25604544
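
    The rigorous boundary treatment the abstract argues for hinges on evaluating reflection and refraction against the true local surface normal rather than a voxel face. A minimal sketch of the unpolarized Fresnel reflectance such photon-transport codes evaluate at each boundary hit (the refractive indices below are illustrative):

```python
import math

def fresnel_reflectance(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for light hitting a boundary from
    medium n1 toward n2, with cos_i the cosine of the incidence angle
    relative to the local surface normal."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # Snell's law, squared
    if sin_t2 > 1.0:
        return 1.0                                  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)                          # average of polarizations
```

    On a serrated voxel boundary the normal (and hence cos_i) is wrong almost everywhere, which is exactly why the voxel-based model in the study misestimates reflected and refracted fractions at curved vessel walls.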

  14. Poster — Thur Eve — 48: Dosimetric dependence on bone backscatter in orthovoltage radiotherapy: A Monte Carlo photon fluence spectral study

    SciTech Connect

    Chow, J; Grigor, G

    2014-08-15

    This study investigated the dosimetric impact due to bone backscatter in orthovoltage radiotherapy. Monte Carlo simulations were used to calculate depth doses and photon fluence spectra using the EGSnrc-based code. An inhomogeneous bone phantom containing a thin water layer (1–3 mm) on top of a bone (1 cm), to mimic the treatment sites of forehead, chest wall and kneecap, was irradiated by the 220 kVp photon beam produced by the Gulmay D3225 x-ray machine. Percentage depth doses and photon energy spectra were determined using Monte Carlo simulations. Results of percentage depth doses showed that the maximum bone dose was about 210–230% larger than the surface dose in the phantoms with different water thicknesses. Surface dose was found to increase from 2.3 to 3.5% when the distance between the phantom surface and bone was increased from 1 to 3 mm. This increase of surface dose on top of a bone was due to the increase of photon fluence intensity, resulting from bone backscatter in the energy range of 30–120 keV, as the water thickness was increased. This was also supported by the increase of the intensity of the photon energy spectral curves at the phantom and bone surface as the water thickness was increased. It is concluded that if the bone inhomogeneity in the sites of forehead, chest wall and kneecap with soft tissue thickness = 1–3 mm is not considered during the dose prescription, there would be an uncertainty in the dose delivery.

  15. Consequences of removing the flattening filter from linear accelerators in generating high dose rate photon beams for clinical applications: A Monte Carlo study verified by measurement

    NASA Astrophysics Data System (ADS)

    Ishmael Parsai, E.; Pearson, David; Kvale, Thomas

    2007-08-01

    An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of the points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.

  16. ITS Version 4.0: Electron/photon Monte Carlo transport codes

    SciTech Connect

    Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.

    1995-07-01

    The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities to the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.

  17. Comparison of the measured thermal neutron beam characteristics at the Lujan center with Monte Carlo transport calculations

    NASA Astrophysics Data System (ADS)

    Muhrer, G.; Pitcher, E. J.; Russell, G. J.; Ino, T.; Ooi, M.; Kiyanagi, Y.

    2004-07-01

    In an effort to characterize the moderators at the Manuel Lujan Center spallation neutron source, Los Alamos, USA, we measured the thermal spectrum and the absolute thermal flux on several of the beamlines. In a second step, we then compared these measurements to results gained from Monte Carlo transport calculations which simulated these experiments. In this paper, we will present the comparison of the measurements with these simulations.

  18. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Monte-Carlo Simulation of Multiple-Molecular-Motor Transport

    NASA Astrophysics Data System (ADS)

    Wang, Zi-Qing; Wang, Guo-Dong; Shen, Wei-Bo

    2010-10-01

    Multimotor transport is studied by Monte Carlo simulation, taking into account motor detachment from the filament. Our work shows that, at low load, the velocity of a multi-motor system can decrease or increase with increasing motor number, depending on the single-motor force-velocity curve. The stall force and run length are greatly reduced compared with other models. In particular, at low ATP concentrations, the stall force of multi-motor transport is even smaller than the stall force of a single motor.

  19. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads

    NASA Astrophysics Data System (ADS)

    Souza, Fabrício M.; Carrara, Thiago L.; Vernek, E.

    2011-09-01

    We investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using nonequilibrium Green’s function technique we find that spin polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon transferred from the left to the right terminal, with spin dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect.

  20. Update on the Status of the FLUKA Monte Carlo Transport Code

    NASA Technical Reports Server (NTRS)

    Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N.; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.

    2004-01-01

    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions from lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator for interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow a better consistency to be achieved between the nucleus-nucleus treatment and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model, based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.

  1. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels

    2015-03-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then the inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment has been applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed and which give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of the new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections together with the Fermi-Teller Z-law, even though experimental data show that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.

  2. Predicting the timing properties of phosphor-coated scintillators using Monte Carlo light transport simulation.

    PubMed

    Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R

    2014-04-21

    Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. 
Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface. PMID:24694727
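    The abstract's finding that a time stamp generated from early photoelectrons minimizes timing degradation can be illustrated with a toy order-statistics simulation. This is a sketch under assumed parameters (an LSO-like 40 ns decay constant and ~1000 detected photoelectrons per event), not the authors' Geant4-style light transport model:

    ```python
    import random
    import statistics

    def event_timestamp(n_pe, decay_ns, k, rng):
        """Time stamp of one scintillation event, taken as the arrival time
        of the k-th photoelectron; emission times follow an exponential
        decay (rise time and transit-time spread are neglected)."""
        arrivals = sorted(rng.expovariate(1.0 / decay_ns) for _ in range(n_pe))
        return arrivals[k - 1]

    def timing_spread(n_events, n_pe, decay_ns, k, seed=1):
        """Standard deviation of the time stamp over many events."""
        rng = random.Random(seed)
        return statistics.stdev(
            event_timestamp(n_pe, decay_ns, k, rng) for _ in range(n_events)
        )

    # Illustrative numbers only: 1000 photoelectrons, 40 ns decay time.
    early = timing_spread(500, 1000, 40.0, k=1)
    late = timing_spread(500, 1000, 40.0, k=50)
    print(early < late)  # True: earlier photoelectrons give a tighter stamp
    ```

    The spread of the k-th arrival time grows with k, which is why triggering on the earliest photoelectrons preserves timing resolution.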

  3. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)

  4. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
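    The algorithmic skeleton the benchmark exercises (particle creation, tracking, tallying, destruction) can be sketched as follows. This is a toy kernel, not the actual MCB code: the geometry, cross sections, and MPI particle trading are omitted, and all numbers are illustrative:

    ```python
    import random

    def mcb_kernel(n_particles, absorption=0.3, slab_mfp=10.0, seed=42):
        """Toy Monte Carlo transport kernel in the spirit of the MCB:
        particles start at one face of a 1-D slab (thickness in mean free
        paths), are tracked collision to collision, tallied, and destroyed
        by absorption or leakage."""
        rng = random.Random(seed)
        tally = {"absorbed": 0, "leaked": 0}
        for _ in range(n_particles):          # particle creation
            x = 0.0
            while True:                       # particle tracking
                x += rng.expovariate(1.0)     # distance to next collision
                if x >= slab_mfp:
                    tally["leaked"] += 1      # particle destruction: leaked
                    break
                if rng.random() < absorption:
                    tally["absorbed"] += 1    # particle destruction: absorbed
                    break
                # otherwise the particle scatters (forward, for simplicity)
        return tally

    t = mcb_kernel(10000)
    print(t["absorbed"] + t["leaked"])  # 10000: every history ends in one bin
    ```

    In the real benchmark, histories like these are additionally traded among processors via MPI to model parallel load behavior.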

  5. Monte Carlo simulation of ion transport of the high strain ionomer with conducting powder electrodes

    NASA Astrophysics Data System (ADS)

    He, Xingxi; Leo, Donald J.

    2007-04-01

    The transport of charge due to an electric stimulus is the primary mechanism of actuation for a class of polymeric active materials known as ionomeric polymer transducers (IPTs). At low frequency, the strain response is strongly related to charge accumulation at the electrodes. Experimental results have demonstrated that using a conducting powder, such as single-walled carbon nanotubes (SWNT), polyaniline (PANI) powder, high-surface-area RuO2, or carbon black, as an electrode increases the mechanical deformation of the IPT by increasing the capacitance of the material. In this paper, a Monte Carlo simulation of a two-dimensional ion hopping model has been built to describe ion transport in the IPT. The shape of the conducting powder is assumed to be a sphere. A step voltage is applied between the electrodes of the IPT, causing thermally activated hopping between multiwell energy structures. The energy barrier height includes three parts: the height due to the external electric potential, the intrinsic energy, and the height due to ion interactions. The finite element package ANSYS is employed to calculate the static electric potential distribution inside the material with the powder sphere in varied locations. The interaction between ions and the electrodes, including the powder electrodes, is determined by using the method of images. At each simulation step, the energy of each cation is updated to compute the ion hopping rate, which directly determines the probability of an ion moving to a neighboring site. The simulation ends when the current drops to zero. Periodic boundary conditions are applied when ions hop in the direction perpendicular to the external electric field: when an ion moves out of the simulation region, its corresponding periodic replica enters from the opposite side. In the direction of the external electric field, parallel processing is used; the code is written in C, augmented with functions that perform message passing between processors using the Message Passing Interface (MPI) standard. The effects of conducting powder size, location, and amount are discussed by studying the stationary charge density plots and ion distribution plots.
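    The thermally activated hopping step described above can be sketched as an Arrhenius rate plus a normalization over neighboring sites. The barrier heights, attempt frequency, and temperature below are assumptions chosen for illustration, not values from the paper:

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def hop_rate(barrier_ev, temperature_k, attempt_freq=1.0e13):
        """Thermally activated hopping rate over an energy barrier
        (Arrhenius form). In the model, the barrier combines the
        external-field, intrinsic, and ion-interaction contributions."""
        return attempt_freq * math.exp(-barrier_ev / (K_B * temperature_k))

    def hop_probabilities(rates):
        """Probability of hopping to each neighboring site, proportional
        to that site's rate."""
        total = sum(rates)
        return [r / total for r in rates]

    # A lower barrier (e.g. hopping along the applied field) is more likely.
    p = hop_probabilities([hop_rate(0.3, 300.0), hop_rate(0.5, 300.0)])
    print(p[0] > p[1])  # True
    ```

    In the full simulation, such probabilities would be re-evaluated at each step as the field and ion-interaction terms change.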

  6. Controllable single-photon transport between remote coupled-cavity arrays

    NASA Astrophysics Data System (ADS)

    Qin, Wei; Nori, Franco

    2016-03-01

    We develop an approach for controllable single-photon transport between two remote one-dimensional coupled-cavity arrays, used as quantum registers, mediated by an additional one-dimensional coupled-cavity array, acting as a quantum channel. A single two-level atom located inside one cavity of the intermediate channel is used to control the long-range coherent quantum coupling between two remote registers, thereby functioning as a quantum switch. With a time-independent perturbative treatment, we find that the leakage of quantum information can in principle be made arbitrarily small. Furthermore, our method can be extended to realize a quantum router in multiregister quantum networks, where single-photons can be either stored in one of the registers or transported to another on demand. These results are confirmed by numerical simulations.

  7. Unidirectional transport in electronic and photonic Weyl materials by Dirac mass engineering

    NASA Astrophysics Data System (ADS)

    Bi, Ren; Wang, Zhong

    2015-12-01

    Unidirectional transport has been observed in two-dimensional systems; however, it has not yet been experimentally observed in three-dimensional bulk materials. In this theoretical work, we show that the recently discovered Weyl materials provide a platform for unidirectional transport inside bulk materials. With high experimental feasibility, a complex Dirac mass can be generated and manipulated in photonic Weyl crystals, creating unidirectionally propagating modes observable in transmission experiments. A possible realization in (electronic) Weyl semimetals is also studied. We show in a lattice model that, with a short-range interaction, the desired form of the Dirac mass can be spontaneously generated in a first-order transition.

  8. Photonics

    NASA Astrophysics Data System (ADS)

    Hiruma, Teruo

    1993-04-01

    After developing various kinds of photodetectors, such as phototubes, photomultiplier tubes, image pickup tubes, and solid-state photodetectors, along with a variety of light sources, we also started to develop integrated systems utilizing new detectors or imaging devices. These led us to the technology for single-photon-counting imaging and the detection of picosecond and femtosecond phenomena. Through those experiences, we came to the understanding that the photon is a paste of substances, and yet we know so little about the photon. By developing technologies for many fields such as analytical chemistry, high energy physics, medicine, biology, brain science, and astronomy, we are beginning to understand that the mind and life are based on the same matter, that is, substance. Since humankind has so little knowledge about the substance concerning the mind and life, there is at present some confusion on these subjects. If we explore photonics more deeply, many problems we now have in the world could be solved. By creating new knowledge and technology, I believe we will be able to solve the problems of illness, aging, energy, environment, human capability, and finally, the essential healthiness of the six billion human beings in the world.

  9. Using FLUKA Monte Carlo transport code to develop parameterizations for fluence and energy deposition data for high-energy heavy charged particles

    NASA Astrophysics Data System (ADS)

    Brittingham, John; Townsend, Lawrence; Barzilla, Janet; Lee, Kerry

    2012-03-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to work with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm². Heavy charged ions from Z=1 to Z=26, with energies from 0.1 to 10 GeV/nucleon, were simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared to well-known data and to the calculations of other deterministic and Monte Carlo codes. Results will be presented.
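    The lookup-table idea, replacing repeated FLUKA runs with interpolation over precomputed results, can be sketched with a bilinear interpolator over an (areal density, energy) grid. The grid points and dose values below are placeholders, not FLUKA output:

    ```python
    import bisect

    def bilinear(table, xs, ys, x, y):
        """Bilinear interpolation in a 2-D lookup table: rows indexed by
        xs (areal density, g/cm^2), columns by ys (energy, GeV/nucleon)."""
        i = max(1, min(bisect.bisect_left(xs, x), len(xs) - 1))
        j = max(1, min(bisect.bisect_left(ys, y), len(ys) - 1))
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[j - 1], ys[j]
        tx = (x - x0) / (x1 - x0)
        ty = (y - y0) / (y1 - y0)
        v00, v01 = table[i - 1][j - 1], table[i - 1][j]
        v10, v11 = table[i][j - 1], table[i][j]
        return (v00 * (1 - tx) * (1 - ty) + v10 * tx * (1 - ty)
                + v01 * (1 - tx) * ty + v11 * tx * ty)

    depths = [0.0, 50.0, 100.0]   # areal density grid, g/cm^2
    energies = [0.1, 1.0, 10.0]   # energy grid, GeV/nucleon
    dose = [[1.0, 2.0, 3.0],      # placeholder dose values
            [0.8, 1.6, 2.4],
            [0.5, 1.0, 1.5]]
    print(bilinear(dose, depths, energies, 25.0, 0.1))  # 0.9
    ```

    A real parameterization would fit smooth functions to the tabulated values instead of interpolating, but the lookup mechanics are the same.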

  10. A generalized framework for in-line energy deposition during steady-state Monte Carlo radiation transport

    SciTech Connect

    Griesheimer, D. P.; Stedry, M. H.

    2013-07-01

    A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)

  11. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    NASA Astrophysics Data System (ADS)

    Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.

    2016-03-01

    In order to improve quality of life for cancer patients, radiation techniques are constantly evolving. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beams (beamlets) with various intensities, to achieve the intended radiation dose to the tumor with minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10 × 10 cm² field at 100 cm SSD. The calculated and measured values are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector were compared with those of the SFD and the Monte Carlo simulation; the results show a percentage difference of less than 2%.
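    The output factor itself is a simple ratio: the reading (or simulated dose) for the test field over the reading for the 10 × 10 cm² reference field at the same depth and SSD. A minimal sketch with hypothetical readings, not the paper's data:

    ```python
    def output_factor(reading_field, reading_reference):
        """Output factor: detector reading (or simulated dose) for the
        test field divided by the reading for the 10 x 10 cm^2 reference
        field, both at the same depth and SSD."""
        return reading_field / reading_reference

    def percent_difference(a, b):
        """Percentage difference relative to the mean of the two values."""
        return 100.0 * abs(a - b) / ((a + b) / 2.0)

    # Hypothetical small-field readings, normalized to the reference field.
    of_microdiamond = output_factor(0.672, 1.000)
    of_monte_carlo = output_factor(0.681, 1.000)
    print(percent_difference(of_microdiamond, of_monte_carlo) < 2.0)  # True
    ```

    Agreement within a stated tolerance, as in the abstract's 2% figure, is the usual acceptance criterion for such detector comparisons.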

  12. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect

    Müller, Florian Jenny, Patrick Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
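    The MLMC idea can be sketched independently of the streamline solver: a telescoping estimator sums a heavily sampled coarse-level mean and sparsely sampled fine-level corrections. The toy model below stands in for the flow solver and is purely illustrative:

    ```python
    import random

    def mlmc_estimate(sampler, n_per_level):
        """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
        `sampler(level, rng)` must return (P_level, P_level_minus_1)
        computed from the SAME random input so the corrections have small
        variance; it is user-supplied."""
        rng = random.Random(0)
        total = 0.0
        for level, n in enumerate(n_per_level):
            diffs = []
            for _ in range(n):
                fine, coarse = sampler(level, rng)
                diffs.append(fine if level == 0 else fine - coarse)
            total += sum(diffs) / n
        return total

    def toy_sampler(level, rng):
        """Toy 'solver': the level-l solution is the mean of 4**l uniform
        samples, so level corrections shrink as the level increases."""
        u = [rng.random() for _ in range(4 ** level)]
        m = max(1, len(u) // 4)
        return sum(u) / len(u), sum(u[:m]) / m

    # Many cheap coarse samples, few expensive fine corrections.
    est = mlmc_estimate(toy_sampler, [4000, 400, 40])
    print(abs(est - 0.5) < 0.1)  # True: estimates E[U(0,1)] = 0.5
    ```

    The design point is that most of the cost sits on the cheap coarse level, while the fine levels only correct the bias, which is what accelerates plain MC.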

  13. Monte Carlo transport calculations and analysis for reactor pressure vessel neutron fluence

    SciTech Connect

    Wagner, J.C.; Haghighat, A.; Petrovic, B.G.

    1996-06-01

    The application of Monte Carlo methods for reactor pressure vessel (RPV) neutron fluence calculations is examined. As many commercial nuclear light water reactors approach the end of their design lifetime, it is of great consequence that reactor operators and regulators be able to characterize the structural integrity of the RPV accurately for financial reasons, as well as safety reasons, due to the possibility of plant life extensions. The Monte Carlo method, which offers explicit three-dimensional geometric representation and continuous energy and angular simulation, is well suited for this task. A model of the Three Mile Island unit 1 reactor is presented for determination of RPV fluence; Monte Carlo (MCNP) and deterministic (DORT) results are compared for this application; and numerous issues related to performing these calculations are examined. Synthesized three-dimensional deterministic models are observed to produce results that are comparable to those of Monte Carlo methods, provided the two methods utilize the same cross-section libraries. Continuous energy Monte Carlo methods are shown to predict more (15 to 20%) high-energy neutrons in the RPV than deterministic methods.

  14. Monte Carlo study of the energy response and depth dose water equivalence of the MOSkin radiation dosimeter at clinical kilovoltage photon energies.

    PubMed

    Lian, C P L; Othman, M A R; Cutajar, D; Butson, M; Guatelli, S; Rosenfeld, A B

    2011-06-01

    Skin dose is often the quantity of interest for radiological protection, as the skin is the organ that receives maximum dose during kilovoltage X-ray irradiations. The purpose of this study was to simulate the energy response and the depth dose water equivalence of the MOSkin radiation detector (Centre for Medical Radiation Physics (CMRP), University of Wollongong, Australia), a MOSFET-based radiation sensor with a novel packaging design, at clinical kilovoltage photon energies typically used for superficial/orthovoltage therapy and X-ray CT imaging. Monte Carlo simulations by means of the Geant4 toolkit were employed to investigate the energy response of the CMRP MOSkin dosimeter on the surface of the phantom, and at various depths ranging from 0 to 6 cm in a 30 × 30 × 20 cm³ water phantom. By varying the thickness of the tissue-equivalent packaging, and by adding thin metallic foils to the existing design, the dose enhancement effect of the MOSkin dosimeter at low photon energies was successfully quantified. For a 5 mm diameter photon source, it was found that the MOSkin was water equivalent to within 3% at shallow depths less than 15 mm. It is recommended that for depths larger than 15 mm, the appropriate depth dose water equivalent correction factors be applied to the MOSkin at the relevant depths if this detector is to be used for depth dose assessments. This study has shown that the Geant4 Monte Carlo toolkit is useful for characterising the surface energy response and depth dose behaviour of the MOSkin. PMID:21559885

  15. Two-photon transport through a waveguide coupling to a whispering-gallery resonator containing an atom and photon-blockade effect

    NASA Astrophysics Data System (ADS)

    Shi, T.; Fan, Shanhui

    2013-06-01

    We investigate the two-photon transport through a waveguide side coupling to a whispering-gallery-atom system. Using the Lehmann-Symanzik-Zimmermann reduction approach, we present the general formula for the two-photon processes including the two-photon scattering matrices, the wave functions, and the second order correlation functions of the outgoing photons. Based on the exact results of the second order correlation functions, we analyze the quantum statistics behaviors of the outgoing photons for two different cases: (a) the ideal case without the intermodal coupling in the whispering-gallery resonator; and (b) the case in the presence of the intermodal coupling which leads to more complex nonlinear behavior. In the ideal case, we show that the system consists of two independent scattering pathways, a free pathway by a cavity mode without atomic excitation, and a “Jaynes-Cummings” pathway described by the Jaynes-Cummings Hamiltonian of a single-mode cavity coupling to an atom. The presence of the free pathway leads to two-photon correlation properties that are distinctively different from the standard Jaynes-Cummings model, in both the strong and weak-coupling regime. In the presence of intermodal mixing, the system no longer exhibits a free resonant pathway. Instead, both the single-photon and the two-photon transport properties depend on the position of the atom. Thus, in the presence of intermodal mixing, one can in fact tune the photon correlation properties by changing the position of the atom. Our formalism can be used to treat resonator and cavity dissipation as well.

  16. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir Alexander

    Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. 
Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle differential basis with common cross-section libraries.

  17. Monte-Carlo-derived insights into dose-kerma-collision kerma inter-relationships for 50 keV-25 MeV photon beams in water, aluminum and copper

    NASA Astrophysics Data System (ADS)

    Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.

    2015-01-01

    The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose, D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated material. D/K in sub-equilibrium small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression is proposed for X̄, the distance upstream from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel: X̄_emp ≈ 0.5 R_csda(Ē0), where Ē0 is the mean initial secondary electron energy. These X̄_emp values agree well with exact MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values.

  18. Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads

    NASA Astrophysics Data System (ADS)

    Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson

    2012-02-01

    Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).

  19. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting

    NASA Astrophysics Data System (ADS)

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin

    2015-10-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge.

  20. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting.

    PubMed

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A; Choi, Kyoung-Shin

    2015-01-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984

  1. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting

    PubMed Central

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin

    2015-01-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (∼2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ∼0.2 eV but also increases the majority carrier density and mobility, enhancing electron–hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984

  2. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation progresses.

  3. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  4. Use of single scatter electron monte carlo transport for medical radiation sciences

    DOEpatents

    Svatos, Michelle M.

    2001-01-01

    The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

  5. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling

    SciTech Connect

    Schaefer, C.; Jansen, A. P. J.

    2013-02-07

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
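
    The mass-balance correction described above (a least-squares projection built from the stoichiometry matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy stoichiometry matrix, the rates, and the pseudoinverse projection form are all assumptions.

```python
import numpy as np

def mass_balance_correction(S, r):
    """Project raw stochastic kMC reaction rates r onto the null space of
    the stoichiometry matrix S, i.e. find the closest rates (in the
    least-squares sense) that exactly conserve mass: S @ r_corrected = 0."""
    correction = S.T @ np.linalg.pinv(S @ S.T) @ (S @ r)
    return r - correction

# Toy example: A -> B and B -> A; at steady state both rates must balance.
S = np.array([[-1.0,  1.0],   # net production of species A per reaction
              [ 1.0, -1.0]])  # net production of species B per reaction
r_raw = np.array([1.05, 0.95])          # noisy stochastic rate estimates
r_fix = mass_balance_correction(S, r_raw)
print(r_fix)  # both rates pulled to 1.0, mass now conserved
```

Using the pseudoinverse keeps the projection well defined even when `S` is rank deficient, which is typical since stoichiometric rows are linearly dependent.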

  6. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study

    SciTech Connect

    Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad

    2015-02-24

    Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for treating shallow tumors, sparing the normal tissue beyond the tumor; photon beams, on the other hand, are used to treat deep targets. Both beam types have limitations, for example the depth dependence of the penumbra and the lack of lateral equilibrium for small electron fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the simulated percent depth dose (PDD) and profiles against measurement. Next, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing electron and photon beams: reduced depth dependence of the pure electron penumbra, especially for small fields, and less dramatic variation of the PDD curve with irradiation field size.

  7. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Khledi, Navid; Arbabi, Azim; Sardari, Dariush; Mohammadi, Mohammad; Ameri, Ahmad

    2015-02-01

    Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for treating shallow tumors, sparing the normal tissue beyond the tumor; photon beams, on the other hand, are used to treat deep targets. Both beam types have limitations, for example the depth dependence of the penumbra and the lack of lateral equilibrium for small electron fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the simulated percent depth dose (PDD) and profiles against measurement. Next, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons pass through the holes, yielding a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing electron and photon beams: reduced depth dependence of the pure electron penumbra, especially for small fields, and less dramatic variation of the PDD curve with irradiation field size.

  8. Topological aspects in the photonic-crystal analog of single-particle transport in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Esposito, Luca; Gerace, Dario

    2013-07-01

    We present a perturbative approach to derive the semiclassical equations of motion for the two-dimensional electron dynamics under the simultaneous presence of static electric and magnetic fields, where the quantized Hall conductance is known to be directly related to the topological properties of translationally invariant magnetic Bloch bands. In close analogy to this approach, we develop a perturbative theory of two-dimensional photonic transport in gyrotropic photonic crystals to mimic the physics of quantum Hall systems. We show that a suitable permittivity grading of a gyrotropic photonic crystal is able to simulate the simultaneous presence of analog electric and magnetic field forces for photons, and we rigorously derive the topology-related term in the equation for the electromagnetic energy velocity that is formally equivalent to the electronic case. A possible experimental configuration is proposed to observe a bulk photonic analog to the quantum Hall physics in graded gyromagnetic photonic crystals.

  9. The effect of energy weighting on x-ray imaging based on photon counting detector: a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

    2012-03-01

    Photon counting detectors based on semiconductor materials are a promising imaging modality and provide many benefits for x-ray imaging compared with conventional detectors. Such a detector is able to measure the x-ray photon energy deposited by each event and provide the x-ray spectrum formed by the detected photons. Recently, photon counting detectors have been developed for x-ray imaging. However, little work has been done on developing novel x-ray imaging techniques and evaluating image quality in x-ray systems based on photon counting detectors. In this study, we simulated computed tomography (CT) images using projection-based and image-based energy weighting techniques and evaluated the effect of energy weighting on CT images. We designed an x-ray CT system equipped with a cadmium telluride (CdTe) detector operating in photon counting mode using the Geant4 Application for Tomographic Emission (GATE) simulation. A microfocus x-ray source was modeled to reduce the flux of photons and minimize spectral distortion. The phantom had a cylindrical shape of 30 mm diameter and consisted of polymethylmethacrylate (PMMA) containing blood (1.06 g/cm3), iodine, and gadolinium (50 mg/cm3). Reconstructed images of the phantom were acquired with projection-based and image-based energy weighting techniques. To evaluate image quality, the contrast-to-noise ratio (CNR) was calculated as a function of the number of energy bins. The CNR of images acquired with both energy weighting techniques was improved compared with that of integrating and counting images, and it increased with the number of energy bins. As the number of energy bins increased, the CNR of the image-based energy weighting images became higher than that of the projection-based energy weighting images. The results of this study show that energy weighting techniques based on photon counting detectors can improve image quality, and that the number of energy bins used for generating the image is important.
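
    As a rough illustration of image- or projection-based energy weighting, the sketch below bins a detected spectrum and weights low-energy counts more heavily than a plain counting detector would. The w(E) ∝ E⁻³ weight is a commonly cited approximation of the optimal low-contrast weight and an assumption here, as are the energy range and bin count; the abstract does not state the weights used.

```python
import numpy as np

def weighted_counts(photon_energies_keV, n_bins, e_min=20.0, e_max=80.0):
    """Bin detected photon energies into n_bins energy bins and apply an
    energy-dependent weight w(E) ~ E^-3 to each bin, instead of weighting
    every detected count equally as a counting detector does."""
    edges = np.linspace(e_min, e_max, n_bins + 1)
    counts, _ = np.histogram(photon_energies_keV, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weights = centers ** -3
    weights /= weights.sum()        # normalize so the weights sum to 1
    return np.sum(weights * counts)

rng = np.random.default_rng(0)
energies = rng.uniform(20.0, 80.0, 10000)   # stand-in for a detected spectrum
print(weighted_counts(energies, n_bins=8))
```

With one energy bin the weighted signal reduces to the plain photon count, so the difference from a counting detector grows with the number of bins, consistent with the CNR trend reported above.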

  10. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.

    2010-01-01

    The presentation outline includes: motivation; the radiation transport codes considered (HZETRN, UPROP, FLUKA, Geant4) and their main physics; the space radiation cases considered (solar particle events and galactic cosmic rays); results for slab geometry; results for spherical geometry; and a summary.

  11. Far-field pattern simulation of flip-chip bonded power light-emitting diodes by a Monte Carlo photon-tracing method

    NASA Astrophysics Data System (ADS)

    Hu, Fei; Qian, Ke-Yuan; Luo, Yi

    2005-05-01

    The far-field pattern of light-emitting diodes (LEDs) is an important issue in practical applications. We used a Monte Carlo photon-tracing method for the package design of flip-chip bonded power LEDs. As a first-order approximation, we propose using a plane light source model to calculate the far-field pattern of encapsulated LEDs. The far-field pattern is also studied by use of a more detailed model, which takes the structure of all epitaxial layers of a flip-chip bonded power LED into consideration. By comparing the simulation results with the experimental data, we have concluded that the plane light source model is much less time-consuming and offers fairly good precision for package design.
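
    A toy version of such a Monte Carlo far-field calculation, assuming an idealized plane Lambertian emitter rather than the paper's encapsulated-LED model (the sample size, binning, and source model are all illustrative assumptions):

```python
import numpy as np

# Trace photons from a plane Lambertian source and bin their polar angles
# to recover the far-field radiant intensity pattern.
rng = np.random.default_rng(2)
n = 500_000
# Lambertian emission: pdf ~ cos(theta)sin(theta), so theta = arcsin(sqrt(u))
theta = np.arcsin(np.sqrt(rng.random(n)))
hist, edges = np.histogram(theta, bins=18, range=(0.0, np.pi / 2))
centers = 0.5 * (edges[:-1] + edges[1:])
# Divide counts by the solid angle of each annular bin to get intensity
intensity = hist / (np.sin(centers) * np.diff(edges))
intensity /= intensity[0]   # normalize to the on-axis bin
# For a Lambertian plane source the pattern should follow cos(theta)
```

The recovered `intensity` should track cos(theta) up to Monte Carlo noise, which is the sanity check one would run before trusting the same tracing machinery on a full chip-plus-encapsulant geometry.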

  12. Far-field pattern simulation of flip-chip bonded power light-emitting diodes by a Monte Carlo photon-tracing method.

    PubMed

    Hu, Fei; Qian, Ke-yuan; Luo, Yi

    2005-05-10

    The far-field pattern of light-emitting diodes (LEDs) is an important issue in practical applications. We used a Monte Carlo photon-tracing method for the package design of flip-chip bonded power LEDs. As a first-order approximation, we propose using a plane light source model to calculate the far-field pattern of encapsulated LEDs. The far-field pattern is also studied by use of a more detailed model, which takes the structure of all epitaxial layers of a flip-chip bonded power LED into consideration. By comparing the simulation results with the experimental data, we have concluded that the plane light source model is much less time-consuming and offers fairly good precision for package design. PMID:15943328

  13. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids

    NASA Astrophysics Data System (ADS)

    Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.

    2015-04-01

    We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
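
    One way to fold a static structure factor into a gas-phase Monte Carlo scattering step is rejection sampling on the momentum transfer, in the spirit of the Wojcik-Tachiya approach cited above. The Gaussian-peaked S(q) below is a stand-in for illustration, not an actual Percus-Yevick solution, and the wavenumber and peak parameters are invented.

```python
import numpy as np

def sample_scatter_angle(k, structure_factor, rng, n=1):
    """Rejection-sample scattering angles whose probability is the isotropic
    gas-phase distribution modulated by the static structure factor S(q),
    with momentum transfer q = 2 k sin(theta / 2)."""
    s_max = max(structure_factor(2 * k * np.sin(t / 2))
                for t in np.linspace(0.0, np.pi, 200))   # envelope for rejection
    out = []
    while len(out) < n:
        theta = np.arccos(1 - 2 * rng.random())  # isotropic in cos(theta)
        q = 2 * k * np.sin(theta / 2)
        if rng.random() * s_max < structure_factor(q):
            out.append(theta)
    return np.array(out)

# Stand-in structure factor with a single liquid-like peak near q = 2
S = lambda q: 1.0 + 0.8 * np.exp(-(q - 2.0) ** 2)
rng = np.random.default_rng(1)
angles = sample_scatter_angle(1.5, S, rng, n=2000)
```

Angles whose momentum transfer sits near the S(q) peak are accepted more often, which is exactly the altered anisotropy of coherent scattering the abstract refers to.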

  14. Modeling Positron Transport in Gaseous and Soft-condensed Systems with Kinetic Theory and Monte Carlo

    NASA Astrophysics Data System (ADS)

    Boyle, G.; Tattersall, W.; Robson, R. E.; White, Ron; Dujko, S.; Petrovic, Z. Lj.; Brunger, M. J.; Sullivan, J. P.; Buckman, S. J.; Garcia, G.

    2013-09-01

    An accurate quantitative understanding of the behavior of positrons in gaseous and soft-condensed systems is important for many technological applications as well as to fundamental physics research. Optimizing Positron Emission Tomography (PET) technology and understanding the associated radiation damage requires knowledge of how positrons interact with matter prior to annihilation. Modeling techniques developed for electrons can also be employed to model positrons, and these techniques can also be extended to account for the structural properties of the medium. Two complementary approaches have been implemented in the present work: kinetic theory and Monte Carlo simulations. Kinetic theory is based on the multi-term Boltzmann equation, which has recently been modified to include the positron-specific interaction processes of annihilation and positronium formation. Simultaneously, a Monte Carlo simulation code has been developed that can likewise incorporate positron-specific processes. Funding support from ARC (CoE and DP schemes).

  15. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors.

    PubMed

    Bauer, Thilo; Jäger, Christof M; Jordan, Meredith J T; Clark, Timothy

    2015-07-28

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves. PMID:26233114
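
    The "Metropolis-like acceptance criterion" can be illustrated with the textbook Metropolis form. How the source-gate voltage enters the effective energy difference is not specified in the abstract, so the sketch below uses a plain energy difference and an invented temperature scale.

```python
import math
import random

def metropolis_accept(delta_e, kT, rng):
    """Standard Metropolis criterion: downhill moves are always accepted,
    uphill moves are accepted with Boltzmann probability exp(-dE / kT)."""
    return delta_e <= 0 or rng.random() < math.exp(-delta_e / kT)

rng = random.Random(0)
# Fraction of accepted uphill moves at dE = kT should approach exp(-1) ~ 0.37
acc = sum(metropolis_accept(0.025, 0.025, rng) for _ in range(20000)) / 20000
print(round(acc, 2))
```

In the transistor-channel picture, a gate-dependent term added to `delta_e` would bias carrier moves toward or away from the channel, which is presumably how the field effect enters the simulation.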

  16. Thermal Scattering Law Data: Implementation and Testing Using the Monte Carlo Neutron Transport Codes COG, MCNP and TART

    SciTech Connect

    Cullen, D E; Hansen, L F; Lent, E M; Plechaty, E F

    2003-05-17

    Recently we implemented the ENDF/B-VI thermal scattering law data in our neutron transport codes COG and TART. Our objective was to convert the existing ENDF/B data into double-differential form in the Livermore ENDL format. This allows us to use the ENDF/B data in any neutron transport code, be it a Monte Carlo or deterministic code. This was approached as a multi-step project. The first step was to develop methods to directly use the thermal scattering law data in our Monte Carlo codes. The next step was to convert the data to double-differential form. The last step was to verify that the results obtained using the data directly are essentially the same as the results obtained using the double-differential data. Part of the planned verification was intended to ensure that the data, as finally implemented in the COG and TART codes, gave the same answers as the well-known MCNP code, which includes thermal scattering law data. Limitations in the treatment of thermal scattering law data in MCNP were uncovered that prevented us from performing this part of our verification.

  17. Program EPICP: Electron photon interaction code, photon test module. Version 94.2

    SciTech Connect

    Cullen, D.E.

    1994-09-01

    The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection, and lateral leakage from the detector, as well as events and energy deposition as a function of depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed for the same purpose: electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I merely note where we expect results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.

  18. Use of Transportable Radiation Detection Instruments to Assess Internal Contamination From Intakes of Radionuclides Part I: Field Tests and Monte Carlo Simulations.

    PubMed

    Anigstein, Robert; Erdman, Michael C; Ansari, Armin

    2016-06-01

    The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of Co, Cs, and Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
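
    The "weighted root-mean-square deviation" quoted above admits a straightforward reading: squared relative deviations between measured and simulated count rates, each weighted by the inverse variance of the difference. A sketch of that statistic with made-up numbers, since the paper's actual count rates and uncertainties are not given in the abstract:

```python
import numpy as np

def weighted_rms_deviation(measured, simulated, sigma):
    """Root-mean-square of the relative deviations between measured and
    simulated count rates, each squared deviation weighted by the inverse
    variance of the difference (one plausible reading of the abstract)."""
    d = (simulated - measured) / measured        # relative deviation
    w = 1.0 / sigma ** 2                         # inverse-variance weights
    return np.sqrt(np.sum(w * d ** 2) / np.sum(w))

measured = np.array([100.0, 200.0, 50.0])    # illustrative count rates (cps)
simulated = np.array([104.0, 196.0, 51.0])
sigma = np.array([0.02, 0.02, 0.04])         # uncertainty of each deviation
print(weighted_rms_deviation(measured, simulated, sigma))
```

Inverse-variance weighting ensures that noisier measurements contribute less to the summary figure, which is why the portal monitor's 0.9% and the survey meter's 10.9% are not directly comparable without their per-source uncertainties.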

  19. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo Method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. 
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU ``thread blocks,'' keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check if a neutron in the batch has been terminated or not. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked list type of layout increases memory usage, but lowers the number of data loads that are needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon like algorithm. 
    WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e., determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
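
    The remapping-vector idea described above (sort particle indices by reaction type so same-reaction threads are contiguous, then drop finished histories) reduces, in serial sketch form, to a stable sort plus a filter. The reaction codes below are invented for illustration; WARP's actual encoding and its parallel radix sort are not reproduced here.

```python
import numpy as np

# Reaction code assigned to each neutron this transport iteration;
# 0 marks a completed history (hypothetical encoding for illustration).
DONE = 0
reactions = np.array([2, 0, 1, 2, 0, 1, 1, 3])

# Remapping vector: stable-sort particle indices by reaction type so that
# threads processing the same reaction are contiguous (less warp divergence),
# then drop completed histories from the active set.
remap = np.argsort(reactions, kind="stable")
remap = remap[reactions[remap] != DONE]
print(reactions[remap])   # grouped by reaction: [1 1 1 2 2 3]
```

Because only the index vector moves, the per-particle state stays in place and the expensive global-memory shuffling of a full particle sort is avoided, which is the point the abstract makes about data movement.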

  20. Tests of the Monte Carlo simulation of the photon-tagger focal-plane electronics at the MAX IV Laboratory

    NASA Astrophysics Data System (ADS)

    Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.

    2014-04-01

    Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
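
    A minimal picture of why single-hit TDCs need a rate-dependent correction while multi-hit TDCs largely do not: a single-hit TDC records only the earliest hit in its window, so a true hit is lost whenever a background hit precedes it. For Poisson background the survival probability is exp(-rate × t). The rate, window length, and hit time below are invented for illustration, not taken from the MAX IV setup.

```python
import numpy as np

rng = np.random.default_rng(3)
rate, T, trials = 2.0e6, 100e-9, 200000   # 2 MHz background, 100 ns window
true_hit_time = T / 2                      # true tagger hit mid-window

# Poisson number of background hits per window, uniformly placed in time
n_bg = rng.poisson(rate * T, trials)
# earliest background hit time in each trial (inf when there is none)
first_bg = np.array([rng.uniform(0.0, T, k).min() if k else np.inf
                     for k in n_bg])
# single-hit TDC records the true hit only if no background hit precedes it
recorded = np.mean(first_bg > true_hit_time)
print(recorded, np.exp(-rate * true_hit_time))   # MC vs analytic survival
```

Simulating this loss is exactly the kind of normalization the single-hit analysis above requires, whereas a multi-hit TDC keeps the later hits and sidesteps it.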

  1. Monte Carlo evaluation of the effect of inhomogeneities on dose calculation for low energy photons intra-operative radiation therapy in pelvic area.

    PubMed

    Chiavassa, Sophie; Buge, François; Hervé, Chloé; Delpon, Gregory; Rigaud, Jérome; Lisbona, Albert; Supiot, Sthéphane

    2015-12-01

    The aim of this study was to evaluate the effect of inhomogeneities on dose calculation for low-energy photon intra-operative radiation therapy (IORT) in the pelvic area. A GATE Monte Carlo model of the INTRABEAM® was adapted for the study. Simulations were performed in the CT scan of a cadaver considering a homogeneous segmentation (water) and an inhomogeneous segmentation (5 tissues from ICRU44). Measurements were performed in the cadaver using EBT3 Gafchromic® films. The impact of inhomogeneities on dose calculation in the cadaver was 6% for soft tissues and greater than 300% for bone tissues. EBT3 measurements showed better agreement with calculations for inhomogeneous media. However, the dose discrepancy in soft tissues led to a sub-millimeter (0.65 mm) shift of the effective point dose in depth. Except for bone tissues, the effect of inhomogeneities on dose calculation for low-energy photon intra-operative radiation therapy in the pelvic area was not significant for the studied anatomy. PMID:26420445

  2. Monte Carlo simulation studies of spin transport in graphene armchair nanoribbons

    NASA Astrophysics Data System (ADS)

    Salimath, Akshay Kumar; Ghosh, Bahniman

    2014-10-01

    Research in the area of spintronics is gaining momentum due to the promise that spintronics-based devices have shown. Since the spin degree of freedom of an electron is used to store and process information, spintronics can provide numerous advantages over conventional electronics by providing new functionalities. In this article, we study spin relaxation in graphene nanoribbons (GNR) of armchair type by employing a semiclassical Monte Carlo approach. D'yakonov-Perel' relaxation due to structural inversion asymmetry (Rashba spin-orbit coupling) and Elliott-Yafet (EY) relaxation cause spin dephasing in armchair graphene nanoribbons. We investigate spin relaxation in α-, β- and γ-armchair GNRs with varying width and temperature.

  3. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons

    SciTech Connect

    Glaser, A; Zhang, R; Gladstone, D; Pogue, B

    2014-06-01

    Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams were explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate because the beam-hardening errors of the individual beams average out. The technique was found to be unsuitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near-monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
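The conclusion above that protons produce no usable light follows directly from the Cherenkov threshold condition β > 1/n. A minimal sketch of that threshold calculation (assuming water with refractive index n ≈ 1.33; this is not part of the study's GAMOS/BEAMnrc code):

```python
import math

def cherenkov_threshold_mev(rest_mass_mev, n=1.33):
    """Kinetic energy above which a charged particle emits Cherenkov
    light in a medium of refractive index n (threshold: beta > 1/n)."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * rest_mass_mev

E_e = cherenkov_threshold_mev(0.511)    # electron, ~0.26 MeV
E_p = cherenkov_threshold_mev(938.272)  # proton, ~485 MeV

print(f"electron threshold ~ {E_e:.3f} MeV, proton threshold ~ {E_p:.0f} MeV")
```

Clinical proton beams reach only about 250 MeV, well below the proton threshold, while megavoltage electron and photon beams comfortably exceed the electron threshold, consistent with the abstract's conclusions.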

  4. Influence of photon energy spectra from brachytherapy sources on Monte Carlo simulations of kerma and dose rates in water and air

    SciTech Connect

    Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo

    2010-02-15

    Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd on the final estimations of kerma and dose. Methods: For ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
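The compensation the authors describe, where spectrum-induced shifts cancel in the dose-rate constant, can be illustrated numerically. The numbers below are purely hypothetical, not values from the study:

```python
# Two hypothetical published spectra for the same nuclide: spectrum B shifts
# both the dose rate and the air-kerma strength up by the same ~2%, mimicking
# the compensation reported in the study (all numbers are invented).
spectra = {
    "spectrum_A": {"dose_rate": 1.000, "air_kerma_strength": 0.900},
    "spectrum_B": {"dose_rate": 1.020, "air_kerma_strength": 0.918},
}

# Dose-rate constant = dose rate divided by air-kerma strength.
constants = {name: v["dose_rate"] / v["air_kerma_strength"]
             for name, v in spectra.items()}
rel_diff = abs(constants["spectrum_A"] - constants["spectrum_B"]) / constants["spectrum_A"]
print(constants, f"relative difference in dose-rate constant = {rel_diff:.2e}")
```

Each individual quantity differs by 2% between spectra, yet the ratio is essentially unchanged, which is the behavior the abstract reports.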

  5. Decoupling initial electron beam parameters for Monte Carlo photon beam modelling by removing beam-modifying filters from the beam path

    NASA Astrophysics Data System (ADS)

    DeSmedt, B.; Reynaert, N.; Flachet, F.; Coghe, M.; Thompson, M. G.; Paelinck, L.; Pittomvils, G.; DeWagter, C.; DeNeve, W.; Thierens, H.

    2005-12-01

    A new method is presented to decouple the parameters of the incident e- beam hitting the target of the linear accelerator. It essentially consists of optimizing the agreement between measurements and calculations when the flattening filter and the difference filter (an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high-energy photon beam) are removed from the beam path. This leads to lateral dose-profile curves which depend only on the mean energy of the incident electron beam, since the effect of the radial intensity distribution of the incident e- beam is negligible when both filters are absent. The location of the primary collimator and the thickness and density of the target are not considered as adjustable parameters, since a satisfactorily working Monte Carlo model is obtained for the low-energy photon beam (6 MV) of the linac using the same target and primary collimator. This method was applied to conclude that the mean energy of the incident e- beam for the high-energy photon beam (18 MV) of our Elekta SLi Plus linac is equal to 14.9 MeV. After optimizing the mean energy, the modelling of the filters, in accordance with the information provided by the manufacturer, can be verified by positioning only one filter in the linac head while the other is removed. It is also demonstrated that the parameter setting for bremsstrahlung angular sampling in BEAMnrc ('Simple' using the leading term of the Koch and Motz equation or 'KM' using the full equation) leads to different dose-profile curves for the same incident electron energy for the studied 18 MV beam. It is therefore important to perform the calculations in 'KM' mode. Note that neither filter is physically removed from the linac head: all filters remain present and are only rotated out of the beam. This makes the described method applicable in practice, since no recommissioning process is required.

  6. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    SciTech Connect

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). 
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.

  7. A Monte Carlo study of electron-hole scattering and steady-state minority-electron transport in GaAs

    NASA Astrophysics Data System (ADS)

    Sadra, K.; Maziar, C. M.; Streetman, B. G.; Tang, D. S.

    1988-11-01

    We report the first bipolar Monte Carlo calculations of steady-state minority-electron transport in room-temperature p-GaAs including multiband electron-hole scattering with and without hole overlap factors. Our results show how such processes, which make a significant contribution to the minority-electron energy loss rate, can affect steady-state minority-electron transport. Furthermore, we discuss several other issues which we believe should be investigated before present Monte Carlo treatments of electron-hole scattering can provide quantitative information.

  8. From force-fields to photons: MD simulations of dye-labeled nucleic acids and Monte Carlo modeling of FRET

    NASA Astrophysics Data System (ADS)

    Goldner, Lori

    2012-02-01

    Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground-state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a 3-carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
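A toy version of the Monte Carlo step described above is to sample a fluctuating donor-acceptor distance and average the point-dipole FRET efficiency over the samples. The Förster radius and distance distribution below are illustrative assumptions, not values from the work:

```python
import math, random

R0 = 5.4                  # hypothetical Förster radius, nm
MEAN_R, SIGMA = 5.4, 0.6  # assumed dye-dye distance fluctuations, nm

def fret_efficiency(r, r0=R0):
    """Point-dipole FRET efficiency for donor-acceptor distance r."""
    return 1.0 / (1.0 + (r / r0) ** 6)

random.seed(1)
# Monte Carlo average of E over a fluctuating distance (Gaussian model)
samples = [max(random.gauss(MEAN_R, SIGMA), 0.1) for _ in range(100_000)]
mean_E = sum(fret_efficiency(r) for r in samples) / len(samples)
E_at_mean = fret_efficiency(MEAN_R)
print(f"<E> over fluctuating r = {mean_E:.3f}  vs  E(<r>) = {E_at_mean:.3f}")
```

Because E is a steep, nonlinear function of r, the average of E over a fluctuating distance generally differs from E evaluated at the mean distance, which is one reason linker and dye dynamics matter when comparing MD simulations to measured FRET.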

  9. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup-factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary-particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup-factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup-factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from secondaries and heavy-particle recoils, is obtained between BRYNTRN and Monte Carlo results.
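The buildup-factor method referred to above scales the uncollided (exponentially attenuated) dose by a factor B that accounts for secondary-particle buildup. A minimal sketch with an assumed linear-exponential form for B (not the parameterization used in the paper):

```python
import math

def dose_with_buildup(d0, mu, x, a=0.9, b=0.05):
    """Uncollided dose times a buildup factor B = 1 + a*mu*x*exp(b*mu*x).
    The linear-exponential form of B and the coefficients a, b are
    illustrative assumptions, not the paper's parameterization."""
    mux = mu * x
    return d0 * math.exp(-mux) * (1.0 + a * mux * math.exp(b * mux))

depths = [0.0, 5.0, 10.0, 20.0]           # shield thickness, g/cm^2 (assumed)
mu = 0.1                                  # attenuation coeff., cm^2/g (assumed)
doses = [dose_with_buildup(1.0, mu, x) for x in depths]
print([round(v, 3) for v in doses])
```

The buildup factor partially offsets the exponential attenuation, so the dose falls off more slowly with depth than the uncollided term alone would predict.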

  10. Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks

    SciTech Connect

    Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit; Boozer, Allen H.

    2012-04-11

    Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. Calculations indicate a clear δB² dependence of NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.

  11. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition

    SciTech Connect

    Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
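The spatial domain decomposition described above can be sketched without MPI by giving each "processor" a work queue and handing a particle to its neighbor whenever it tracks to an interprocessor boundary. The 1-D purely absorbing slab physics here is a deliberate toy, not the Mercury tracker:

```python
import random

random.seed(0)
N_DOMAINS, X_MAX = 4, 4.0
WIDTH = X_MAX / N_DOMAINS                 # each "processor" owns one slab

queues = [[] for _ in range(N_DOMAINS)]   # per-domain work queues
queues[0] = [0.0] * 50                    # 50 source particles in domain 0

finished, handoffs = 0, 0
while any(queues):
    for d in range(N_DOMAINS):
        right_edge = (d + 1) * WIDTH
        pending, queues[d] = queues[d], []
        for x in pending:
            x += random.expovariate(1.0)  # flight to the next collision site
            if x < right_edge:
                finished += 1             # absorbed locally (pure absorber)
            elif right_edge >= X_MAX:
                finished += 1             # escaped through the far boundary
            else:
                handoffs += 1             # tracked to an interprocessor boundary:
                queues[d + 1].append(right_edge)  # communicate to the neighbor
                # (restarting the flight at the boundary is valid because the
                #  exponential free-path distribution is memoryless)
print(f"{finished} histories completed, {handoffs} boundary handoffs")
```

The communication pattern, rather than the simplistic physics, is the point: a particle history can span several domains, so queues must be drained repeatedly until all domains are idle.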

  12. An improved empirical approach to introduce quantization effects in the transport direction in multi-subband Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Palestri, P.; Lucci, L.; Dei Tos, S.; Esseni, D.; Selmi, L.

    2010-05-01

    In this paper we propose and validate a simple approach to empirically account for quantum effects in the transport direction of MOS transistors (i.e. source and drain tunneling and the delocalized nature of the carrier wavepacket) in multi-subband Monte Carlo simulators, which already account for quantization in the direction normal to the semiconductor-oxide interface by solving the 1D Schrödinger equation in each section of the device. The model has been validated and calibrated against ballistic non-equilibrium Green's function (NEGF) simulations over a wide range of gate lengths, voltage biases and temperatures. The proposed model has just one adjustable parameter, and our results show that it can achieve good agreement with the NEGF approach.

  13. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations

    NASA Astrophysics Data System (ADS)

    Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.

    2015-12-01

    The scarcity and high cost of ³He have spurred the use of alternative detectors for neutron monitoring. A new lithium yttrium borate scintillator developed at BARC has been studied for use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use in a rem counter.

  14. Monte Carlo simulation of vapor transport in physical vapor deposition of titanium

    SciTech Connect

    Balakrishnan, Jitendra; Boyd, Iain D.; Braun, David G.

    2000-05-01

    In this work, the direct simulation Monte Carlo (DSMC) method is used to model the physical vapor deposition of titanium using electron-beam evaporation. Titanium atoms are vaporized from a molten pool at a very high temperature and are accelerated collisionally to the deposition surface. The electronic excitation of the vapor is significant at the temperatures of interest. Energy transfer between the electronic and translational modes of energy affects the flow significantly. The electronic energy is modeled in the DSMC method and comparisons are made between simulations in which electronic energy is excluded from and included among the energy modes of particles. The experimentally measured deposition profile is also compared to the results of the simulations. It is concluded that electronic energy is an important factor to consider in the modeling of flows of this nature. The simulation results show good agreement with experimental data. (c) 2000 American Vacuum Society.

  15. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone

    NASA Astrophysics Data System (ADS)

    Lodwick, Camille J.

    This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV ¹⁰⁹Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that the modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). 
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The contribution of the patellar bone structure, in which the fluorescent photons originate, was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector assembly about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
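The overlying-tissue effect studied above is, at heart, exponential photon attenuation. A hedged sketch (the attenuation coefficient and the single-coefficient treatment of both the incident and fluorescent photons are illustrative simplifications, not the MCNP4C model):

```python
import math

MU_TISSUE = 0.018   # 1/mm for soft tissue near 88 keV; illustrative value only

def relative_signal(tissue_mm, mu=MU_TISSUE):
    """Fraction of the K XRF signal surviving an overlying tissue slab,
    attenuating both the incident and fluorescent photons with a single
    effective coefficient (a deliberate simplification)."""
    return math.exp(-2.0 * mu * tissue_mm)

per_mm_loss = 1.0 - relative_signal(1.0)
print(f"signal loss per mm of overlying tissue ~ {per_mm_loss:.1%}")
```

The raw signal loss per millimeter is larger than the ~1.15% per mm change in *predicted concentration* quoted in the abstract, because the normalized-ratio calibration corrects for most, but not all, of the attenuation.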

  16. Ionization chamber dosimetry of small photon fields: a Monte Carlo study on stopping-power ratios for radiosurgery and IMRT beams

    NASA Astrophysics Data System (ADS)

    Sánchez-Doblado, F.; Andreo, P.; Capote, R.; Leal, A.; Perucha, M.; Arráns, R.; Núñez, L.; Mainegra, E.; Lagares, J. I.; Carrasco, E.

    2003-07-01

    Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable whether stopping-power ratios in dosimetry protocols, obtained for broad photon beams and quasi-electron-equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 × 10 cm2 beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (Delta = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreements within 0.1% have been obtained for the 10 × 10 cm2 fields. For radiosurgery applicators and narrow MLC beams, the calculated sw,air values agree with the reference within +/-0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, sw,air agrees within 0.1% with the value for 10 × 10 cm2, confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. 
At higher energies (24 MV) the difference in sw,air was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought.
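The central quantity in the abstract, the water-to-air stopping-power ratio, is a fluence-weighted average over the electron spectrum. A numerical sketch with made-up spectra and stopping powers (the full Spencer-Attix average also includes a track-end term below the cutoff Δ, omitted here):

```python
# Illustrative electron fluence spectrum and mass collision stopping powers
# (invented numbers, not TRS-398 data): s_w_air is the fluence-weighted
# water-to-air ratio at the heart of a Spencer-Attix average.
energies = [0.5, 1.0, 2.0, 4.0]            # MeV
fluence  = [0.1, 0.3, 0.4, 0.2]            # relative electron fluence
s_water  = [2.03, 1.85, 1.80, 1.79]        # MeV cm^2/g, illustrative
s_air    = [1.81, 1.66, 1.63, 1.63]        # MeV cm^2/g, illustrative

num = sum(f * s for f, s in zip(fluence, s_water))
den = sum(f * s for f, s in zip(fluence, s_air))
s_w_air = num / den
print(f"s_w,air ~ {s_w_air:.3f}")
```

Because the ratio of stopping powers varies slowly with electron energy, moderate changes to the spectrum (e.g. narrowing the field) shift the average by only a few tenths of a percent, which is why the abstract finds protocol data adequate at 6 MV.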

  17. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    SciTech Connect

    Buchan, A.G.; Calloo, A.A.; Goffin, M.G.; Dargaville, S.; Fang, F.; Pain, C.C.; Navon, I.M.

    2015-09-01

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time-independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high-resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems: one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high-resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
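The method of snapshots with the swapped roles described above (snapshots indexed by space, entries indexed by angle) reduces to an SVD of the snapshot matrix. A sketch on a fabricated low-rank problem (sizes and data are arbitrary, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n_angle, n_space = 200, 40                 # hypothetical resolution

# Fabricated snapshot matrix: a rank-3 angular structure plus small noise.
# Each column is the vector of angular coefficients at one spatial location.
modes = rng.standard_normal((n_angle, 3))
weights = rng.standard_normal((3, n_space))
snapshots = modes @ weights + 0.01 * rng.standard_normal((n_angle, n_space))

# Method of snapshots via SVD: left singular vectors are the POD basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1    # keep 99.9% of the energy
basis = U[:, :r]                               # reduced angular basis
print(f"retained {r} of {min(n_angle, n_space)} possible modes")
```

When the angular flux has low-rank structure across space, as in the article's test problems, only a handful of basis functions are retained, which is the source of the order-of-magnitude reductions in problem size.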

  18. Graphics processing unit parallel accelerated solution of the discrete ordinates for photon transport in biological tissues

    NASA Astrophysics Data System (ADS)

    Peng, Kuan; Gao, Xinbo; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; He, Xiaowei; Wang, Xiaorei; Liang, Jimin; Tian, Jie

    2011-07-01

    As a widely used numerical solution for the radiation transport equation (RTE), the discrete ordinates method can predict the propagation of photons through biological tissues more accurately than the diffusion equation. The discrete ordinates method reduces the RTE to a series of differential equations that can be solved by source iteration (SI). However, the tremendous time consumption of SI, which is partly caused by the expensive computation of each SI step, limits its applications. In this paper, we present a graphics processing unit (GPU) parallel accelerated SI method for discrete ordinates. Utilizing the calculation independence on the levels of the discrete ordinate equation and the spatial element, the proposed method reduces the time cost of each SI step by parallel calculation. The photon reflection at the boundary is calculated based on the results of the last SI step to ensure calculation independence on the level of the discrete ordinate equation. An element sweeping strategy is proposed to detect the calculation independence on the level of the spatial element. A GPU parallel framework called the compute unified device architecture (CUDA) was employed to carry out the parallel computation. The simulation experiments, which were carried out with a cylindrical phantom and a numerical mouse, indicated that the time cost of each SI step can be reduced by up to a factor of 228 by the proposed method with a GTX 260 graphics card.
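Each SI step, the part the authors parallelize on the GPU, freezes the scattering source and sweeps every discrete ordinate independently. A serial toy version for a 1-D, two-ordinate slab (cross sections and the differencing scheme are illustrative, not the paper's RTE setup):

```python
import numpy as np

nx, dx = 50, 0.1
sigma_t, sigma_s, q = 1.0, 0.8, 1.0       # hypothetical slab data
mu = [0.57735, -0.57735]                  # S2 ordinates, weight 1 each

phi = np.zeros(nx)                        # scalar flux
for it in range(500):
    src = 0.5 * (sigma_s * phi + q)       # frozen scattering + fixed source
    phi_new = np.zeros(nx)
    for m in mu:                          # each ordinate sweeps independently
        psi = 0.0                         # vacuum boundary
        cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
        for i in cells:                   # simple implicit (step) update
            psi = (src[i] * dx + abs(m) * psi) / (sigma_t * dx + abs(m))
            phi_new[i] += psi
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new
print(f"SI converged in {it + 1} steps; midplane scalar flux ~ {phi[nx // 2]:.3f}")
```

Because the scattering source is frozen during a step, the sweeps over different ordinates (and, with the paper's element-sweeping strategy, over independent spatial elements) have no data dependence, which is exactly what maps well onto a GPU.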

  19. Prediction of ceramic stereolithography resin sensitivity from theory and measurement of diffusive photon transport

    NASA Astrophysics Data System (ADS)

    Wu, K. C.; Seefeldt, K. F.; Solomon, M. J.; Halloran, J. W.

    2005-07-01

    A general, quantitative relationship between the photon-transport mean free path (l*) and resin sensitivity (DP) in multiple-scattering alumina/monomer suspensions formulated for ceramic stereolithography is presented and experimentally demonstrated. A Mie-theory-based computational method with structure-factor contributions to determine l* was developed. Planar-source diffuse transmittance experiments were performed on monodisperse and bimodal polystyrene/water and alumina/monomer systems to validate this computational tool. The experimental data support the application of this l* calculation method to concentrated suspensions composed of nonaggregating particles of moderately aspherical shape and log-normal size distribution. The values of DP are shown to be approximately five times those of l* in the tested ceramic stereolithography suspensions.
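The quantities in the abstract can be connected in a few lines: single-grain scattering parameters give the transport mean free path l* = 1/(n·C_sct·(1−g)), and the study's empirical finding is DP ≈ 5·l*. All numerical inputs below are illustrative assumptions, not the paper's data:

```python
import math

radius_um = 0.2                     # alumina grain radius, um (assumed)
phi_vol = 0.45                      # particle volume fraction (assumed)
Q_sct, g = 2.1, 0.85                # scattering efficiency/anisotropy (assumed)

grain_vol = (4.0 / 3.0) * math.pi * radius_um**3
n_density = phi_vol / grain_vol                 # grains per um^3
C_sct = Q_sct * math.pi * radius_um**2          # scattering cross section, um^2
l_star = 1.0 / (n_density * C_sct * (1.0 - g))  # transport mean free path, um
D_p = 5.0 * l_star                              # empirical relation from the study
print(f"l* ~ {l_star:.2f} um, D_P ~ {D_p:.2f} um")
```

This simple independent-scatterer estimate ignores the structure-factor correction the authors include for concentrated suspensions, which is why their Mie-based method is needed for quantitative work.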

  20. Correlated Cooper pair transport and microwave photon emission in the dynamical Coulomb blockade

    NASA Astrophysics Data System (ADS)

    Leppäkangas, Juha; Fogelström, Mikael; Marthaler, Michael; Johansson, Göran

    2016-01-01

    We study theoretically electromagnetic radiation emitted by inelastic Cooper-pair tunneling. We consider a dc-voltage-biased superconducting transmission line terminated by a Josephson junction. We show that the generated continuous-mode electromagnetic field can be expressed as a function of the time-dependent current across the Josephson junction. The leading-order expansion in the tunneling coupling, similar to the P(E) theory, has previously been used to investigate the photon emission statistics in the limit of sequential (independent) Cooper-pair tunneling. By explicitly evaluating the system characteristics up to the fourth order in the tunneling coupling, we account for dynamics between consecutively tunneling Cooper pairs. Within this approach we investigate how temporal correlations in the charge transport can be seen in the first- and second-order coherences of the emitted microwave radiation.

  1. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh

    SciTech Connect

    Drumm, C.R.; Lorenz, J.

    1999-03-02

    A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers.

  2. Galerkin-based meshless methods for photon transport in the biological tissue.

    PubMed

    Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min

    2008-12-01

    As an important small animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated because of the highly scattering property of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of the diffusive tissue. The two methods are both based on moving least squares (MLS) approximation, which requires only a series of nodes in the region of interest, so the complicated meshing task of the finite element method (FEM) can be avoided. Moreover, in one method the MLS shape functions are further modified to satisfy the delta function property, which simplifies the processing of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170
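The MLS approximation underlying the GBMM can be sketched in 1-D: at each evaluation point a small weighted least-squares problem is solved over nearby nodes, with no mesh required. The basis, weight function and node set below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
nodes = np.sort(rng.uniform(0.0, 1.0, 25))  # scattered nodes, no mesh
values = np.sin(2.0 * np.pi * nodes)        # sampled field at the nodes

def mls_eval(x, h=0.15):
    """Moving least squares value at x: weighted linear fit over all nodes,
    with a Gaussian weight of support scale h centered at x."""
    w = np.exp(-((x - nodes) / h) ** 2)     # weight function
    P = np.column_stack([np.ones_like(nodes), nodes])   # linear basis [1, x]
    A = P.T @ (w[:, None] * P)              # moment matrix
    b = P.T @ (w * values)
    coeff = np.linalg.solve(A, b)
    return coeff[0] + coeff[1] * x

x0 = 0.5
print(f"MLS ~ {mls_eval(x0):.3f}, exact = {np.sin(2.0 * np.pi * x0):.3f}")
```

Because standard MLS shape functions do not interpolate the nodal values exactly (they lack the delta function property), essential boundary conditions need special treatment, which is the issue the paper's modified shape functions address.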

  3. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam

    2015-09-01

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system DOSXYZnrc was used to investigate the effect of voxel (volume element) size on the accuracy of dose distributions. Dose distributions were calculated for three voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10^9 histories were simulated to reach statistical uncertainties of 2%; each simulation took about 9-10 hours to complete. Measurements were made with a 10 × 10 cm2 field size for the 6 MV photon beam, with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were simulated and measured in a water phantom. The output of the simulations, i.e., the percent depth dose and the dose profile at dmax from the three sets of calculations, is presented and compared with experimental data from Tan Tock Seng Hospital (TTSH), Singapore, over 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results show that the difference between Monte Carlo simulation and experiment depends on voxel size for both the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained with the 1 × 1 × 0.8 cm3 voxel. The dose profile analysis focused on the high-dose-gradient region; for the profile scan along the Y axis, the largest difference, about 12%, was obtained with the 1 × 1 × 0.1 cm3 voxel. This study demonstrates that the choice of voxel arrangement in Monte Carlo simulation is important.

  4. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    SciTech Connect

    Yani, Sitti; Dirgayussa, I Gde E.; Haryanto, Freddy; Arif, Idam; Rhani, Moh. Fadhillah

    2015-09-30

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system DOSXYZnrc was used to investigate the effect of voxel (volume element) size on the accuracy of dose distributions. Dose distributions were calculated for three voxel sizes: 1 × 1 × 0.1 cm{sup 3}, 1 × 1 × 0.5 cm{sup 3}, and 1 × 1 × 0.8 cm{sup 3}. A total of 1 × 10{sup 9} histories were simulated to reach statistical uncertainties of 2%; each simulation took about 9-10 hours to complete. Measurements were made with a 10 × 10 cm{sup 2} field size for the 6 MV photon beam, with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were simulated and measured in a water phantom. The output of the simulations, i.e., the percent depth dose and the dose profile at d{sub max} from the three sets of calculations, is presented and compared with experimental data from Tan Tock Seng Hospital (TTSH), Singapore, over 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results show that the difference between Monte Carlo simulation and experiment depends on voxel size for both the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained with the 1 × 1 × 0.8 cm{sup 3} voxel. The dose profile analysis focused on the high-dose-gradient region; for the profile scan along the Y axis, the largest difference, about 12%, was obtained with the 1 × 1 × 0.1 cm{sup 3} voxel. This study demonstrates that the choice of voxel arrangement in Monte Carlo simulation is important.
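
    The voxel-size sensitivity reported above stems largely from volume averaging: the dose scored in a voxel is the average over the voxel, reported at its center, so a steep build-up region is flattened by thick voxels. The sketch below reproduces that effect with an invented analytic depth-dose curve; it is not the DOSXYZnrc simulation, and all coefficients are illustrative.

```python
import numpy as np

def pdd(z):
    """Toy percent-depth-dose curve: fast build-up, slow exponential fall-off.
    Hypothetical shape, not measured 6 MV data."""
    return (1 - np.exp(-4.0 * z)) * np.exp(-0.05 * z)

def voxel_average(z_max, dz):
    """Average the fine-grained curve over voxels of thickness dz (cm),
    reporting the averages at voxel centers, as a dose-scoring grid does."""
    n = int(z_max / dz)                      # whole voxels that fit in [0, z_max]
    edges = dz * np.arange(n + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fine = np.linspace(0.0, n * dz, 20001)   # fine grid for the voxel averages
    d = pdd(fine)
    avg = np.array([d[(fine >= a) & (fine < b)].mean()
                    for a, b in zip(edges[:-1], edges[1:])])
    return centers, avg

for dz in (0.1, 0.5, 0.8):   # cm, the three voxel thicknesses studied above
    c, avg = voxel_average(5.0, dz)
    err = np.max(np.abs(avg - pdd(c)))
    print(f"dz = {dz} cm: max |averaged - point dose| = {err:.3f}")
```

    The discrepancy is largest in the build-up region, where the curve changes fastest across a voxel, and grows with voxel thickness.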

  5. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport

    SciTech Connect

    Lee, C; Badal, A

    2014-06-15

    Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric error relative to real anatomy bounded by smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into penMesh, a Monte Carlo transport code that supports triangle meshes. We calculated the energy deposition in selected organs of interest for parallel photon beams of three energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement generally improved at higher voxel resolution, although for many organs the differences were small. The difference in energy deposition at 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as the voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second was 6.4×10{sup 4}, 3.3×10{sup 4}, and 1.3×10{sup 4} for the 10, 5, and 1 mm resolutions at 10 MeV, respectively, while the mesh phantom ran at 4.0×10{sup 4} histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared with the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer of the gastrointestinal tract), which cannot be modeled by voxel phantoms.

  6. Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.

    PubMed

    Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred

    2012-02-01

    Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. It is based on object-oriented programming, and its development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A range of equipment models is implemented in the program, and various voxel model file formats are supported. Applications include calculating the counting efficiency of in vivo measurement scenarios and calculating dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulation, including image registration techniques. The authors demonstrate the validity of the program results, provide references for previous successful implementations, and illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596

  7. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering

    SciTech Connect

    Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang

    2011-07-15

    We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.

  8. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang

    2011-07-01

    We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal’s azimuthal position.

  9. High-resolution monte carlo simulation of flow and conservative transport in heterogeneous porous media 1. Methodology and flow results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory leads one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by those characteristics. The need to independently estimate length scales for the generated fields is also discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data provide the opportunity to investigate the effective hydraulic conductivity in a flow field that is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.

  10. A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES

    SciTech Connect

    Schnittman, Jeremy D.; Krolik, Julian H. E-mail: jhk@pha.jhu.edu

    2013-11-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  11. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  12. A Monte Carlo Code for Relativistic Radiation Transport around Kerr Black Holes

    NASA Astrophysics Data System (ADS)

    Schnittman, Jeremy D.; Krolik, Julian H.

    2013-11-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  13. Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference

    NASA Astrophysics Data System (ADS)

    Marzouk, Y.; Parno, M.

    2014-12-01

    We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map, i.e., a Knothe-Rosenblatt rearrangement, using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach, even when using inexact or truncated maps, produces an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
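
    The essence of map-accelerated MCMC, proposing in a well-behaved reference space and pushing samples through a transport map, can be shown with a fixed one-dimensional monotone map. The sketch below is far simpler than the paper's adaptive Knothe-Rosenblatt construction; the log-normal target and exponential map are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Skewed target: unnormalized log-normal(0, 1) density, x > 0."""
    return -np.log(x) - 0.5 * np.log(x) ** 2

T = np.exp                       # monotone transport map: reference r -> target x
def log_pullback(r):
    """Pullback density: log pi(T(r)) + log |T'(r)|, with T'(r) = exp(r)."""
    return log_target(T(r)) + r

# Plain random-walk Metropolis in the reference space; samples are mapped back.
r, chain = 0.0, []
lp = log_pullback(r)
for _ in range(50000):
    r_new = r + rng.normal(scale=1.0)
    lp_new = log_pullback(r_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
        r, lp = r_new, lp_new
    chain.append(T(r))                        # push the state through the map

print(np.mean(chain))   # lognormal mean is exp(0.5), about 1.649
```

    Here the map is chosen so the pullback is exactly Gaussian, the ideal case; the paper's contribution is learning such a map adaptively from the chain's own history.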

  14. Assessment of uncertainties in the lung activity measurement of low-energy photon emitters using Monte Carlo simulation of ICRP male thorax voxel phantom.

    PubMed

    Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S

    2015-12-01

    Assessment of intake of long-lived actinides via the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich detector and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors in the Monte Carlo code FLUKA. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies, using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics, in the form of scattering factors (SFs). SFs are found to decrease with increasing energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV; it reduces to 1.36 at 238 keV. PMID:25468992

  15. Monte Carlo Study of the Effect of Collimator Thickness on T-99m Source Response in Single Photon Emission Computed Tomography

    PubMed Central

    Islamian, Jalil Pirayesh; Toossi, Mohammad Taghi Bahreyni; Momennezhad, Mahdi; Zakavi, Seyyed Rasoul; Sadeghi, Ramin; Ljungberg, Michael

    2012-01-01

    In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study evaluates the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. The SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of 99mTc, an acrylic cylindrical Jaszczak phantom with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated detector parameters, the contributions of Compton scattering and photoelectric interactions, and the peak-to-Compton (P/C) area in the energy spectra obtained by scanning the sources with 11 collimator thicknesses ranging from 2.400 to 2.410 cm, we concluded that 2.405 cm is the proper thickness for the LEHR parallel-hole collimator. Image quality analysis by the structural similarity index (SSIM) algorithm, and also by visual inspection, showed that images of suitable quality were obtained with a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm collimator thickness also showed suitable performance-parameter results compared with the other collimator thicknesses. PMID:23372440

  16. The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)

    SciTech Connect

    Rhoades, W.A.; Simpson, D.B.

    1997-10-01

    TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
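
    The discrete-ordinates idea underlying TORT can be illustrated with a toy 1D slab sweep: pick an angular quadrature, then march each ordinate through the mesh with a diamond-difference relation. The sketch below is purely illustrative (one energy group, pure absorption, invented slab dimensions), not TORT's 3D nodal and characteristic machinery.

```python
import numpy as np

# Toy 1D discrete-ordinates (S_8) sweep for a purely absorbing slab.
# Illustrative values only; no scattering source, so no inner iterations.
n_cells, width, sigma_t = 200, 2.0, 1.5
dx = width / n_cells
mu, w = np.polynomial.legendre.leggauss(8)   # Gauss-Legendre angular quadrature
psi_in = 1.0                                  # unit angular flux on the left face

flux = np.zeros(n_cells)                      # scalar-flux tally (toy, forward half only)
for m in np.where(mu > 0)[0]:                 # sweep each rightward ordinate
    psi = psi_in
    for i in range(n_cells):
        # diamond difference: mu (psi_out - psi_in)/dx + sigma_t (psi_in + psi_out)/2 = 0
        psi_out = ((mu[m] / dx - 0.5 * sigma_t) * psi) / (mu[m] / dx + 0.5 * sigma_t)
        flux[i] += w[m] * 0.5 * (psi + psi_out)
        psi = psi_out

# For pure absorption the exit angular flux should be exp(-sigma_t * width / mu).
print(psi, np.exp(-sigma_t * width / mu[-1]))
```

    With scattering, the tallied scalar flux would feed a source term and the sweep would be repeated to convergence, which is exactly the inner iteration that real S_N codes work hard to accelerate.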

  17. Comparison of Two Accelerators for Monte Carlo Radiation Transport Calculations, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p Coprocessor: A Case Study for X-ray CT Imaging Dose Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

    2014-06-01

    Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89 to 4.49 and 3.01 to 3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.

  18. C2-ray: A new method for photon-conserving transport of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Mellema, Garrelt; Iliev, Ilian T.; Alvarez, Marcelo A.; Shapiro, Paul R.

    2006-03-01

    We present a new numerical method for calculating the transfer of ionizing radiation, called C2-ray (conservative, causal ray-tracing method). The transfer of ionizing radiation in diffuse gas presents a special challenge to most numerical methods which involve time- and spatial-differencing. Standard approaches to radiative transport require that grid cells must be small enough to be optically-thin while time steps are small enough that ionization fronts do not cross a cell in a single time step. This quickly becomes prohibitively expensive. We have developed an algorithm which overcomes these limitations and is, therefore, orders of magnitude more efficient. The method is explicitly photon-conserving, so the depletion of ionizing photons by bound-free opacity is guaranteed to equal the photoionizations these photons caused. As a result, grid cells can be large and very optically-thick without loss of accuracy. The method also uses an analytical relaxation solution for the ionization rate equations for each time step which can accommodate time steps which greatly exceed the characteristic ionization and ionization front crossing times. Together, these features make it possible to integrate the equation of transfer along a ray with many fewer cells and time steps than previous methods. For multi-dimensional calculations, the code utilizes short-characteristics ray tracing. The method scales as the product of the number of grid cells and the number of sources. C2-ray is well-suited for coupling radiative transfer to gas and N-body dynamics methods, on both fixed and adaptive grids, without imposing additional limitations on the time step and grid spacing. We present several tests of the code involving propagation of ionization fronts in one and three dimensions, in both homogeneous and inhomogeneous density fields. We compare to analytical solutions for the ionization front position and velocity, some of which we derive here for the first time.
As an illustration, we apply C2-ray to simulate cosmic reionization in a three-dimensional inhomogeneous cosmological density field. We also apply it to the problem of I-front trapping in a dense clump, using both a fixed and an adaptive grid.
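
    The photon-conserving bookkeeping can be mimicked with a toy 1D sweep in which each cell absorbs at most the photons entering it and at most the neutrals it contains, so absorbed photons exactly equal photoionizations no matter how optically thick a cell is. This sketch is not the C2-ray algorithm itself (which solves the ionization rate equations analytically per time step); the cell counts and emission rate are invented.

```python
import numpy as np

# Toy photon-conserving ionization-front sweep (illustrative units throughout).
n_cells = 100
neutral = np.full(n_cells, 1000.0)     # neutral atoms per cell
emitted = 0.0
absorbed_total = 0.0

for step in range(200):
    photons = 2500.0                   # photons injected at the left edge per step
    emitted += photons
    for i in range(n_cells):
        # A cell can ionize no more atoms than it holds and absorb no more
        # photons than arrive, so absorption == photoionization by construction.
        absorbed = min(photons, neutral[i])
        neutral[i] -= absorbed
        photons -= absorbed
        absorbed_total += absorbed
        if photons <= 0.0:             # beam fully absorbed; stop the sweep early
            break

ionized_total = 1000.0 * n_cells - neutral.sum()
print(emitted - absorbed_total)        # photons that escaped past the grid
```

    Because conservation holds per cell rather than per optical depth increment, the front position stays correct even when one cell swallows the entire beam, which is the property that lets C2-ray use coarse grids and long time steps.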

  19. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport

    NASA Astrophysics Data System (ADS)

    Tautz, R. C.

    2016-05-01

    A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
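
    The SIMD model described above can be sketched in NumPy, with array operations standing in for the GPU's data-parallel lanes: every test particle advances in lockstep through identical instructions. For simplicity the sketch integrates gyration in a uniform magnetic field with the Boris rotation rather than the article's Kolmogorov-type turbulent fields; all values are invented.

```python
import numpy as np

# Data-parallel test-particle integration: one array row per particle,
# all rows updated by the same vectorized operations (SIMD-style).
rng = np.random.default_rng(1)
N, dt, steps = 1024, 0.01, 1000
B = np.array([0.0, 0.0, 1.0])            # uniform magnetic field, q/m = 1 (toy units)
v = rng.normal(size=(N, 3))              # initial velocities
x = np.zeros((N, 3))
speed0 = np.linalg.norm(v, axis=1)       # |v| should be conserved (no E field)

t = 0.5 * dt * B                         # Boris half-rotation vector
s = 2.0 * t / (1.0 + t @ t)
for _ in range(steps):
    v_prime = v + np.cross(v, t)
    v = v + np.cross(v_prime, s)         # pure magnetic rotation, norm-preserving
    x = x + v * dt

print(np.max(np.abs(np.linalg.norm(v, axis=1) - speed0)))
```

    The Boris update is an exact rotation, so particle speeds are conserved to round-off; in the turbulence application the same lockstep structure holds, with the field evaluated per particle from a precomputed spectrum of modes.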

  20. Numerical modeling of photon migration in the cerebral cortex of the living rat using the radiative transport equation

    NASA Astrophysics Data System (ADS)

    Fujii, Hiroyuki; Okawa, Shinpei; Nadamoto, Ken; Okada, Eiji; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao

    2015-03-01

    Accurate modeling and efficient calculation of photon migration in biological tissues is required for determining the optical properties of living tissues from in vivo experiments. This study develops a calculation scheme of photon migration for determining the optical properties of the rat cerebral cortex (ca 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation, assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with profiles measured by in vivo experiments using near infrared light. In addition, an efficient calculation method using the delta-Eddington approximation of the scattering phase function is tested.
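
    The delta-Eddington approximation mentioned above handles highly forward-peaked scattering by splitting off the forward peak of the phase function as effectively unscattered light and rescaling the transport coefficients. Below is a minimal sketch using the standard textbook scaling with f = g^2; the tissue values are illustrative, not the authors' rat-cortex parameters.

```python
def delta_eddington(mu_s, g):
    """Standard delta-Eddington rescaling of the scattering coefficient mu_s
    and anisotropy factor g (textbook form, not the authors' solver)."""
    f = g ** 2                      # fraction treated as the unscattered forward peak
    mu_s_star = mu_s * (1.0 - f)    # reduced scattering coefficient
    g_star = (g - f) / (1.0 - f)    # reduced anisotropy factor
    return mu_s_star, g_star

# Illustrative tissue-like values: mu_s = 100 /cm, g = 0.9
print(delta_eddington(100.0, 0.9))
```

    The rescaled problem scatters far less strongly and far less anisotropically, which is what makes the transport-equation solution tractable at fixed angular resolution.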

  1. Elucidating the electron transport in semiconductors via Monte Carlo simulations: an inquiry-driven learning path for engineering undergraduates

    NASA Astrophysics Data System (ADS)

    Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio

    2015-09-01

    Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
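
    A drastically simplified, single-valley version of the ensemble Monte Carlo approach already makes the teaching point: electrons undergo free flights of exponentially distributed duration, accelerate in the field between scatterings, and their drift velocity approaches the Drude value qEτ/m. Everything here (constant scattering rate, isotropic velocity randomization, normalized units) is invented for illustration and much simpler than the multivalley InP model described.

```python
import numpy as np

# Toy single-valley ensemble Monte Carlo of electron drift (normalized units).
rng = np.random.default_rng(2)
q_over_m = 1.0      # charge-to-mass ratio (toy)
E = 3.0             # driving electric field (toy)
tau = 0.5           # mean free-flight time between scattering events
n_e, n_flights = 5000, 200

v = np.zeros(n_e)                 # velocity component along the field
time_total = np.zeros(n_e)
displacement = np.zeros(n_e)
for _ in range(n_flights):
    dt = rng.exponential(tau, n_e)               # random free-flight durations
    displacement += v * dt + 0.5 * q_over_m * E * dt ** 2   # ballistic flight
    time_total += dt
    v = rng.normal(scale=1.0, size=n_e)          # scattering randomizes velocity

v_drift = displacement.sum() / time_total.sum()
print(v_drift)    # Drude estimate q*E*tau/m = 1.5
```

    In a real multivalley simulation the scattering rate, the post-scattering state, and the effective mass all depend on energy and valley, which is what produces the rich field dependence the learning path explores.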

  2. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-15

    Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell, including the cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore list and a custom-built Livermore/DNA hybrid physics list. 10{sup 6} particles were simulated in a geometry containing 840 cells. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 {mu}m; a mathematical algorithm ensured that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER {approx} 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum.
Based on the simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell: the low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster, so they have a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.

  3. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin.

    PubMed

    Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A

    2014-01-01

    Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively. PMID:25234839

  5. Design studies of volume-pumped photolytic systems using a photon transport code

    NASA Astrophysics Data System (ADS)

    Prelas, M. A.; Jones, G. L.

    1982-01-01

    The use of volume sources, such as nuclear pumping, presents some unique features in the design of photolytically driven systems (e.g., lasers). In systems such as these, for example, a large power deposition is not necessary. However, certain restrictions, such as self-absorption, limit the ability of photolytically driven systems to scale by volume. A photon transport computer program was developed at the University of Missouri-Columbia to study these limitations. The development of this code is important, perhaps necessary, for the design of photolytically driven systems. With the aid of this code, a photolytically driven iodine laser was designed for utilization with a 3He nuclear-pumped system with a TRIGA reactor as the neutron source. Calculations predict a peak power output of 0.37 kW. Using the same design, it is also anticipated that the system can achieve a 14-kW output using a fast burst-type reactor neutron source, and a 0.65-kW peak output using 0.1 Torr of the alpha emitter radon-220 as part of the fill. The latter would represent a truly portable laser system.

  6. An investigation of the depth dose in the build-up region, and surface dose for a 6-MV therapeutic photon beam: Monte Carlo simulation and measurements

    PubMed Central

    Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn

    2013-01-01

The percentage depth dose in the build-up region and the surface dose for the 6-MV photon beam from a Varian Clinac 23EX medical linear accelerator were investigated for square field sizes of 5 × 5, 10 × 10, 15 × 15 and 20 × 20 cm2 using the EGSnrc Monte Carlo (MC) simulation package. The depth dose was found to change rapidly in the build-up region, and the percentage surface dose increased proportionally with field size from approximately 10% to 30%. Measurements were also taken using four common detectors: TLD chips, a PFD dosimeter, and parallel-plate and cylindrical ionization chambers, and compared with the MC simulated data, which served as the gold standard in our study. The surface doses obtained from each detector were derived by extrapolation of the measured depth doses near the surface and were all found to be higher than that of the MC simulation. The lowest and highest over-responses in the surface dose measurement were found with the TLD chip and the CC13 cylindrical ionization chamber, respectively. Increasing the field size increased the percentage surface dose almost linearly for the various dosimeters and also in the MC simulation. Interestingly, the use of the CC13 ionization chamber eliminates the high-gradient feature of the depth dose near the surface. Correction factors for the measured surface dose from each dosimeter for square field sizes between 5 × 5 and 20 × 20 cm2 are introduced. PMID:23104898
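The surface-dose figures above come from extrapolating depth-dose readings near the surface back to zero depth. A minimal sketch of that extrapolation step, using hypothetical near-surface readings and a plain linear least-squares fit (the study's own fitting procedure may differ):

```python
import numpy as np

# Hypothetical near-surface percentage depth-dose readings (depth in mm).
depths = np.array([0.5, 1.0, 1.5, 2.0])
pdd = np.array([18.0, 26.0, 33.5, 41.0])   # percent of maximum dose

# A linear least-squares fit evaluated at zero depth estimates the surface dose.
slope, intercept = np.polyfit(depths, pdd, 1)
surface_dose = float(intercept)
print(round(surface_dose, 2))
```

For these illustrative numbers the extrapolated surface dose lands near the ~10% figure quoted for small fields; real measurements would use more points and a detector-specific correction.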

  7. A feasibility study to calculate unshielded fetal doses to pregnant patients in 6-MV photon treatments using Monte Carlo methods and anatomically realistic phantoms

    SciTech Connect

    Bednarz, Bryan; Xu, X. George

    2008-07-15

A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation, due to the decrease in distance between the fetal body and the field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage, due to the increasing size of the fetus and the relatively constant distance between the field edge and the fetal body at each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility of accurately determining the absorbed organ doses in the mother and fetus as part of treatment planning and, eventually, in risk management.

  8. Study of water transport phenomena on cathode of PEMFCs using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Soontrapa, Karn

This dissertation deals with the development of a three-dimensional computational model of water transport phenomena in the cathode catalyst layer (CCL) of PEMFCs. The catalyst layer in the numerical simulation was developed using an optimized sphere-packing algorithm built on an optimization technique named the adaptive random search technique (ARSET). The ARSET algorithm generates the initial locations of the spheres and allows them to move in random directions with a variable moving distance, randomly selected from the sampling range, based on the Lennard-Jones potentials of the current and new configurations. The solid fraction values obtained from this algorithm are in the range of 0.631 to 0.6384, while the actual processing time can be reduced by 8% to 36%, depending on the number of spheres. The initial random-number sampling range was investigated, and the appropriate sampling range value is 0.5. This numerically developed cathode catalyst layer has been used to simulate the diffusion of protons, in the form of hydronium, and of oxygen molecules through the cathode catalyst layer. The movements of hydronium and oxygen molecules are controlled by random vectors, and all of these moves have to obey the Lennard-Jones potential energy constraint. A chemical reaction between the two species happens when they share the same neighborhood, resulting in the creation of water molecules. Like the hydronium and oxygen molecules, these newly formed water molecules also diffuse through the cathode catalyst layer. It is important to study the distributions of hydronium, oxygen, and water molecules during the diffusion process in order to understand the lifetime of the cathode catalyst layer. The effect of fuel flow rate on the water distribution has also been studied by varying the hydronium and oxygen molecule input.
Based on the results of these simulations, a hydronium:oxygen input ratio of 3:2 was found to be the best choice for this study. To study the effects of metal impurities and gas contamination on the cathode catalyst layer, the catalyst layer structure was modified by adding metal impurities, and gas contamination was introduced with the oxygen input. In this study, gas contamination has very little effect on the electrochemical reaction inside the cathode catalyst layer because the simulation is transient in nature and the percentage of gas contamination is small, in the range of 0.0005% to 0.0015% for CO and 0.028% to 0.04% for CO2. Metal impurities have a larger effect on the performance of the PEMFC because they not only change the structure of the cathode catalyst layer but also affect the movement of fuel and water product. Aluminum has the worst effect on the cathode catalyst layer structure because it yields the lowest amount of newly formed water and the largest amount of trapped water product compared to iron at the same impurity percentage. The iron impurity shows some positive effect on the lifetime of the cathode catalyst layer: at 0.75 wt% iron, the amount of newly formed water is 6.59% lower than for the pure carbon catalyst layer, but the amount of trapped water product is 11.64% lower than for the pure catalyst layer. The lifetime of the impure cathode catalyst layer is longer than that of the pure one because the amount of water still trapped inside the pure cathode catalyst layer is higher than in the impure one. Even though the impure cathode catalyst layer has a longer lifetime, it sacrifices electrical power output because less electrochemical reaction occurs inside the impure catalyst layer.
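The ARSET move described above (random displacement accepted or rejected on the Lennard-Jones energy of the new configuration) can be sketched as follows; the sphere coordinates, move range, and acceptance rule (reject any energy increase) are illustrative assumptions, not the dissertation's exact algorithm:

```python
import math
import random

def lj(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def total_energy(coords):
    """Sum of Lennard-Jones pair potentials over all sphere pairs."""
    return sum(lj(math.dist(coords[i], coords[j]))
               for i in range(len(coords))
               for j in range(i + 1, len(coords)))

def arset_step(coords, max_move=0.5):
    """One ARSET-style move: displace a random sphere by a random distance
    drawn from the sampling range, keeping the move only if the
    Lennard-Jones energy of the new configuration does not increase."""
    i = random.randrange(len(coords))
    old = coords[i]
    e_old = total_energy(coords)
    coords[i] = tuple(c + random.uniform(-max_move, max_move) for c in old)
    if total_energy(coords) > e_old:
        coords[i] = old                  # reject the trial move
    return coords

random.seed(1)
pts = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
e0 = total_energy(pts)
for _ in range(200):
    arset_step(pts)
```

Because moves that raise the energy are rejected, the configuration energy is non-increasing, which drives the packing toward dense, low-energy arrangements.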

  9. Overview of the MCU Monte Carlo Software Package

    NASA Astrophysics Data System (ADS)

    Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

    2014-06-01

MCU (Monte Carlo Universal) is a project on the development and practical use of a universal computer code for simulating particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides information on the current state of the project. The developed libraries of constants are briefly described, and the capabilities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important reactor physics problems solved with the code are presented.

  10. Technique for handling wave propagation specific effects in biological tissue: mapping of the photon transport equation to Maxwell's equations.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin; Paganin, David M; Hendahewa, Priyantha R D S

    2008-10-27

    A novel algorithm for mapping the photon transport equation (PTE) to Maxwell's equations is presented. Owing to its accuracy, wave propagation through biological tissue is modeled using the PTE. The mapping of the PTE to Maxwell's equations is required to model wave propagation through foreign structures implanted in biological tissue for sensing and characterization of tissue properties. The PTE solves for only the magnitude of the intensity but Maxwell's equations require the phase information as well. However, it is possible to construct the phase information approximately by solving the transport of intensity equation (TIE) using the full multigrid algorithm. PMID:18958061
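The phase-recovery step mentioned above solves the transport of intensity equation. A minimal sketch under a uniform-intensity assumption, where the TIE reduces to a Poisson equation solved here with an FFT (the paper uses a full multigrid solver; the FFT solve is only the simplest periodic stand-in, and all grid parameters are hypothetical):

```python
import numpy as np

def solve_tie_uniform(dIdz, I0, k, dx):
    """Recover the phase from the transport-of-intensity equation under a
    uniform-intensity assumption, I0 * laplacian(phi) = -k * dI/dz,
    via an FFT-based Poisson solve on a periodic grid."""
    n = dIdz.shape[0]
    f = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(f, f, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                          # guard the zero frequency
    phi_hat = np.fft.fft2(-k * dIdz / I0) / (-k2)
    phi_hat[0, 0] = 0.0                     # phase is defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))

# Self-check on a smooth periodic phase with a known Laplacian.
n, dx, I0, k = 64, 1.0, 1.0, 10.0
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi_true = np.cos(2 * np.pi * X / (n * dx)) + np.sin(2 * np.pi * Y / (n * dx))
lap = -(2 * np.pi / (n * dx)) ** 2 * phi_true
phi = solve_tie_uniform(-I0 * lap / k, I0, k, dx)
max_err = float(np.max(np.abs(phi - phi_true)))
```

For spatially varying intensity the full TIE, and hence a solver such as multigrid, is needed; this sketch only illustrates why an axial intensity derivative suffices to construct the missing phase.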

  11. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    SciTech Connect

    Heifetz, D.B.

    1987-04-01

Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. Several solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, the limit of macrotasking may be reached with each test flight, and each split test flight, being a separate task.
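The array-of-flights layout and the gap problem can be illustrated with a toy vectorized random walk; the slab thickness and absorption probability are invented for the example, and completed flights are removed by boolean compaction rather than left as gaps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flights live in flat arrays so each transport step is one vectorized
# operation; finished flights are compacted out so no gaps accumulate.
# Toy physics: exponential free paths, 30% absorption per collision.
slab = 5.0                                  # slab thickness in mean free paths
x = np.zeros(10_000)                        # current depth of each flight
transmitted = 0
while x.size:
    x = x + rng.exponential(1.0, x.size)    # advance every flight at once
    escaped = x >= slab
    transmitted += int(escaped.sum())
    x = x[~escaped]                         # compact out transmitted flights
    x = x[rng.random(x.size) >= 0.3]        # compact out absorbed flights
frac = transmitted / 10_000
```

For this forward-only toy model the expected transmitted fraction is exp(-0.3 × 5) ≈ 0.22, since the number of collisions inside the slab is Poisson with mean 5.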

  12. The All Particle Monte Carlo method: Atomic data files

    SciTech Connect

    Rathkopf, J.A.; Cullen, D.E.; Perkins, S.T.

    1990-11-06

    Development of the All Particle Method, a project to simulate the transport of particles via the Monte Carlo method, has proceeded on two fronts: data collection and algorithm development. In this paper we report on the status of the data libraries. The data collection is nearly complete with the addition of electron, photon, and atomic data libraries to the existing neutron, gamma ray, and charged particle libraries. The contents of these libraries are summarized.

  13. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment

    SciTech Connect

    Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.

    2012-07-01

The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC codes. However, the situation is different for eigenvalue calculations, which are performed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
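The generation-by-generation structure that complicates GPU threading can be seen in a drastically simplified serial sketch; the fission probability, neutron yield, and batch size are toy values chosen so the analytic multiplication factor is exactly k = nu × p_fission = 1.0:

```python
import random

random.seed(42)

# Toy generation-by-generation eigenvalue iteration: every neutron is
# absorbed, and with probability p_fission the absorption is a fission
# emitting nu neutrons, so analytically k = nu * p_fission = 1.0 here.
p_fission, nu = 0.4, 2.5
batch = 20_000
k_gen = []
for generation in range(20):
    fissions = sum(random.random() < p_fission for _ in range(batch))
    k_gen.append(fissions * nu / batch)     # neutrons born / neutrons started
k_eff = sum(k_gen[5:]) / len(k_gen[5:])     # discard early generations
```

The barrier between generations (the next batch cannot start until the current one finishes banking its fission sites) is exactly the synchronization point that a GPU implementation must coordinate across threads.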

  14. Monte-Carlo simulation studies of the effect of temperature and diameter variation on spin transport in II-VI semiconductor nanowires

    NASA Astrophysics Data System (ADS)

    Chishti, Sabiq; Ghosh, Bahniman; Bishnoi, Bhupesh

    2015-02-01

We have analyzed the spin transport behaviour of four II-VI semiconductor nanowires by simulating spin-polarized transport using a semi-classical Monte-Carlo approach. The different scattering mechanisms considered are acoustic phonon scattering, surface roughness scattering, polar optical phonon scattering, and spin-flip scattering. The II-VI materials used in our study are CdS, CdSe, ZnO and ZnS. The spin transport behaviour is first studied by varying the temperature (4-500 K) at a fixed diameter of 10 nm, and then by varying the diameter (8-12 nm) at a fixed temperature of 300 K. For II-VI compounds, the dominant spin relaxation mechanisms, D'yakonov-Perel' and Elliott-Yafet, have been employed in the first-order model to simulate the spin transport. The dependence of the spin relaxation length (SRL) on the diameter and temperature has been analyzed.

  15. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
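One of the random-sampling fundamentals listed above is inverse-CDF sampling of the free-flight distance. A minimal sketch with an arbitrary total cross section: p(s) = σ_t exp(−σ_t s) inverts to s = −ln(ξ)/σ_t for ξ uniform on (0, 1]:

```python
import math
import random

random.seed(0)

# Inverse-CDF sampling of the free-flight distance between collisions.
sigma_t = 2.0                               # total cross section, 1/cm (arbitrary)
samples = [-math.log(1.0 - random.random()) / sigma_t for _ in range(100_000)]
mean_free_path = sum(samples) / len(samples)
```

The sample mean converges to the mean free path 1/σ_t = 0.5 cm, which is the usual quick sanity check on the sampler.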

  16. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are seen in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
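The photon interaction forcing mentioned above can be sketched in its textbook form: the collision site is drawn from the exponential distribution truncated to the region of interest, and the analog interaction probability is carried as a statistical weight. The slab width and attenuation coefficient here are invented for illustration, not taken from the paper:

```python
import math
import random

random.seed(5)

sigma, width = 0.2, 3.0                  # attenuation coefficient (1/cm), slab (cm)
p_int = 1.0 - math.exp(-sigma * width)   # analog probability of a collision in the slab

def forced_site():
    """Interaction forcing: sample the collision site from the exponential
    distribution truncated to the slab, carrying p_int as a statistical
    weight so that every source photon scores a collision."""
    s = -math.log(1.0 - random.random() * p_int) / sigma
    return s, p_int

sites = [forced_site()[0] for _ in range(100_000)]
mean_site = sum(sites) / len(sites)
```

Because every history now interacts inside the slab while the weight compensates exactly, tallies keep their analog expectation with far lower variance in thin or low-density regions.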

  17. Resonant photon transport through metal-insulator-metal multilayers consisting of Ag and SiO2

    NASA Astrophysics Data System (ADS)

    Yoshida, Maiko; Tomita, Satoshi; Yanagi, Hisao; Hayashi, Shinji

    2010-07-01

We have conducted experimental and numerical studies of resonant photon transport through Ag-SiO2-Ag multilayers with varying SiO2 gap-layer thickness, motivated by its application to the development of a metamaterial superlens. Photon-transport spectra measured using a double-prism system with a p-polarized He-Ne laser show a resonant photon tunneling (RPT) peak in the total reflection region and an additional peak in the propagating region. Calculated dispersion curves and electric field profiles reveal that the RPT peak is brought about by antisymmetrically coupled surface-plasmon polaritons (SPPs), very similar to the long-range SPPs in a single metal film. The additional peak, however, is caused by TM guided modes with symmetrically coupled SPPs. We demonstrate that the TM0 guided modes move continuously from the total reflection region to the propagating region as the gap-layer thickness decreases. This will enable us to realize a device that converts evanescent waves into propagating waves of light, opening the possibility of an alternative type of hyperlens.

  18. Time-correlated photon-counting probe of singlet excitation transport and restricted rotation in Langmuir-Blodgett monolayers

    SciTech Connect

    Anfinrud, P.A.; Hart, D.E.; Struve, W.S.

    1988-07-14

Fluorescence depolarization was monitored by time-correlated single-photon counting in organized monolayers of octadecylrhodamine B (ODRB) in dioleoylphosphatidylcholine (DOL) at air-water interfaces. At low ODRB density, the depolarization was dominated by restricted rotational diffusion. Increases in surface pressure reduced both the angular range and the diffusion constant for rotational motion. At higher ODRB densities, additional depolarization was observed due to electronic excitation transport. A two-dimensional two-particle theory developed by Baumann and Fayer was found to provide an excellent description of the transport dynamics for reduced chromophore densities up to approximately 5.0. The testing of transport theories proves to be relatively insensitive to the orientational distribution assumed for the ODRB transition moments in these two-dimensional systems.

  19. The transport character of quantum state in one-dimensional coupled-cavity arrays: effect of the number of photons and entanglement degree

    NASA Astrophysics Data System (ADS)

    Ma, Shao-Qiang; Zhang, Guo-Feng

    2016-04-01

The transport properties of photons injected into one-dimensional coupled-cavity arrays (CCAs) are studied. It is found that the number of photons changes neither the evolution cycle of the system nor the time points at which W states and NOON states are obtained with relatively high probability. The transport dynamics in the CCAs shows that entanglement enhances the effectiveness of state transmission: a quantum state with maximum concurrence can be transmitted completely, photon loss aside.

  20. Monte Carlo Simulations on Neutron Transport and Absorbed Dose in Tissue-Equivalent Phantoms Exposed to High-Flux Epithermal Neutron Beams

    NASA Astrophysics Data System (ADS)

    Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.

    2010-04-01

Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work, Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.

  1. Transport calculations for a 14.8 MeV neutron beam in a water phantom

    NASA Astrophysics Data System (ADS)

    Goetsch, S. J.

A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations utilizing two different statistical estimators (next collision and track length) are presented.
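The two estimators named above can be compared in a toy setting; the slab width and cross section are invented, and a purely absorbing medium is assumed so that both estimators target the same cell-averaged flux:

```python
import math
import random

random.seed(9)

# Track-length vs collision estimator for the average flux in a purely
# absorbing slab: track length scores the path travelled inside the cell,
# the collision estimator scores 1/sigma_t per collision in the cell.
sigma_t, width, n = 1.0, 2.0, 200_000
track = collision = 0.0
for _ in range(n):
    s = -math.log(1.0 - random.random()) / sigma_t   # free path from entry
    track += min(s, width)
    if s < width:
        collision += 1.0 / sigma_t
track /= n
collision /= n
```

Both averages converge to (1 − exp(−σ_t·w))/σ_t; they differ only in variance, which is why production codes offer both.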

  2. Verification by Monte Carlo methods of a power law tissue-air ratio algorithm for inhomogeneity corrections in photon beam dose calculations.

    PubMed

    Webb, S; Fox, R A

    1980-03-01

    A Monte Carlo computer program has been used to calculate axial and off-axis depth dose distributions arising from the interaction of an external beam of 60Co radiation with a medium containing inhomogeneities. An approximation for applying the Monte Carlo data to the configuration where the lateral extent of the inhomogeneity is less than the beam area, is also presented. These new Monte Carlo techniques rely on integration over the dose distributions from constituent sub-beams of small area and the accuracy of the method is thus independent of beam size. The power law correction equation (Batho equation) describing the dose distribution in the presence of tissue inhomogeneities is derived in its most general form. By comparison with Monte Carlo reference data, the equation is validated for routine patient dosimetry. It is explained why the Monte Carlo data may be regarded as a fundamental reference point in performing these tests of the extension to the Batho equation. Other analytic correction techniques, e.g. the equivalent radiological path method, are shown to be less accurate. The application of the generalised power law equation in conjunction with CT scanner data is discussed. For ease of presentation, the details of the Monte Carlo techniques and the analytic formula have been separated into appendices. PMID:7384209
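One common form of the generalized Batho power-law correction discussed above can be sketched as follows. The tissue-air ratio here is a toy pure-exponential model (a clinical implementation interpolates measured TAR tables), and the layer geometry is invented for illustration:

```python
import math

def tar(depth_cm, mu=0.0657):
    """Toy tissue-air ratio: pure exponential attenuation with a 60Co-like
    effective coefficient. Real implementations interpolate measured tables."""
    return math.exp(-mu * depth_cm)

def batho_correction(layers, point_depth):
    """Generalized Batho power-law correction factor at point_depth.
    `layers` lists (top_depth_cm, relative_density) from the surface down,
    with the calculation point inside the last layer; each layer
    contributes TAR(d_m) ** (rho_m - rho_above), and the product is
    normalized by the homogeneous-water TAR."""
    cf, rho_above = 1.0, 0.0
    for top, rho in layers:
        cf *= tar(point_depth - top) ** (rho - rho_above)
        rho_above = rho
    return cf / tar(point_depth)

# Water 0-3 cm, lung-like layer (rho = 0.3) 3-8 cm, water below; point at 10 cm.
cf = batho_correction([(0.0, 1.0), (3.0, 0.3), (8.0, 1.0)], 10.0)
```

The correction reduces to 1 for homogeneous water, and for the lung example it exceeds 1, reflecting the reduced attenuation above the point; with a pure-exponential TAR it coincides with the equivalent radiological path result, so the methods only diverge for realistic, non-exponential TAR data.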

  3. Full-Band Particle-Based Monte-Carlo Simulation with Anharmonic Corrections for Phonon Transport in III-N Nanostructures

    NASA Astrophysics Data System (ADS)

    Sundaresan, Sasi; Jayasekera, Thushari; Ahmed, Shaikh

    2014-03-01

The Monte Carlo based statistical approach to solving the Boltzmann Transport Equation (BTE) has become the norm for investigating heat transport in semiconductors at the sub-micron regime, owing to its ability to characterize realistically sized device geometries. One weakness of this technique is that it predominantly uses an empirically fitted phonon dispersion relation as input to determine the properties of phonons and predict the thermal conductivity for a specified material geometry. Empirically fitted dispersion relations assume the harmonic approximation, thereby failing to account for thermal expansion, the effect of strain on spring stiffness, and accurate phonon-phonon interactions. To account for the anharmonic contributions in the calculation of thermal conductivity, in this work we employ a coupled molecular mechanics-Monte Carlo (MM-MC) approach. The atomistically resolved non-deterministic approach adopted in this work is found to produce satisfactory results for heat transport and thermal conductivity in both the ballistic and diffusive regimes for III-N nanostructures. Supported by the U.S. National Science Foundation Grant No. CCF-1218839.

  4. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs.

    PubMed

    Rodriguez, M; Sempau, J; Brualla, L

    2012-05-21

A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. PMID:22538321
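The two games combined in splitting-roulette can be sketched in their textbook form; the weight thresholds are arbitrary, and this is not the paper's selective-splitting criterion, only the generic weight bookkeeping both games rely on:

```python
import random

random.seed(3)

def split(weight, n=4):
    """Splitting: replace one particle by n copies carrying weight/n each,
    conserving total weight exactly."""
    return [weight / n] * n

def roulette(weight, w_min=0.05, w_surv=0.2):
    """Russian roulette: a particle below w_min survives with probability
    weight/w_surv and is boosted to w_surv, conserving weight on average."""
    if weight >= w_min:
        return [weight]
    return [w_surv] if random.random() < weight / w_surv else []

children = split(1.0, 8)                 # weight conserved exactly
mean_w = sum(w for _ in range(200_000) for w in roulette(0.02)) / 200_000
```

Splitting multiplies samples where they matter while roulette prunes unimportant histories; because both games preserve expected weight, any tally built on the weights stays unbiased.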

  6. MCNP/X TRANSPORT IN THE TABULAR REGIME

    SciTech Connect

    HUGHES, H. GRADY

    2007-01-08

The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

  7. An Electron/Photon/Relaxation Data Library for MCNP6

    SciTech Connect

    Hughes, III, H. Grady

    2015-08-07

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  8. Comparison of photon counting and conventional scintillation detectors in a pinhole SPECT system for small animal imaging: Monte carlo simulation studies

    NASA Astrophysics Data System (ADS)

    Lee, Young-Jin; Park, Su-Jin; Lee, Seung-Wan; Kim, Dae-Hong; Kim, Ye-Seul; Kim, Hee-Joung

    2013-05-01

    The photon counting detector based on cadmium telluride (CdTe) or cadmium zinc telluride (CZT) is a promising imaging modality that provides many benefits compared to conventional scintillation detectors. By using a pinhole collimator with the photon counting detector, we were able to improve both the spatial resolution and the sensitivity. The purpose of this study was to evaluate the photon counting and conventional scintillation detectors in a pinhole single-photon emission computed tomography (SPECT) system. We designed five pinhole SPECT systems of two types: one type with a CdTe photon counting detector and the other with a conventional NaI(Tl) scintillation detector. We conducted simulation studies and evaluated imaging performance. The results demonstrated that the spatial resolution of the CdTe photon counting detector was 0.38 mm, with a sensitivity 1.40 times greater than that of a conventional NaI(Tl) scintillation detector for the same detector thickness. Also, the average scatter fractions of the CdTe photon counting and the conventional NaI(Tl) scintillation detectors were 1.93% and 2.44%, respectively. In conclusion, we successfully evaluated various pinhole SPECT systems for small animal imaging.

  9. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.

    TOXLINE Toxicology Bibliographic Information

    Wang YY; Peng X; Alharbi M; Dutin CF; Bradley TD; Gérôme F; Mielke M; Booth T; Benabid F

    2012-08-01

    We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large-core fiber exhibits the lowest attenuation reported to date among all kagome fibers, 40 dB/km, over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little pulse distortion during propagation. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air.
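
    The quoted 40 dB/km attenuation translates into delivery efficiency through the usual dB power budget. A small sketch, where the fiber lengths are assumed examples rather than values from the paper:

```python
def transmitted_fraction(alpha_db_per_km, length_km):
    """Fraction of input power surviving propagation: 10^(-alpha * L / 10)."""
    return 10.0 ** (-alpha_db_per_km * length_km / 10.0)

alpha = 40.0                       # dB/km, from the abstract
for L in (0.001, 0.01, 0.1):       # 1 m, 10 m, 100 m delivery lengths (assumed)
    print(f"{L * 1000:6.0f} m: {transmitted_fraction(alpha, L):.3f}")
```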

  10. Utilizing Monte-Carlo radiation transport and spallation cross sections to estimate nuclide dependent scaling with altitude

    NASA Astrophysics Data System (ADS)

    Argento, D.; Reedy, R. C.; Stone, J.

    2010-12-01

    Cosmogenic Nuclides (CNs) are a critical new tool for geomorphology, allowing researchers to date Earth surface events and measure process rates [1]. Prior to CNs, many of these events and processes had no absolute method for measurement and relied entirely on relative methods [2]. Continued improvements in CN methods are necessary for expanding analytic capability in geomorphology. In the last two decades, significant progress has been made in refining these methods and reducing analytic uncertainties [1,3]. Calibration data and scaling methods are being developed to provide a self-consistent platform for interpreting nuclide concentration values as geologic data [4]. However, nuclide-dependent scaling has been difficult to address due to analytic uncertainty and the sparseness of altitude transects. Artificial target experiments are underway, but these experiments take considerable time for nuclide buildup at lower altitudes. In this study, the Monte Carlo radiation transport code MCNPX is used to model the galactic cosmic-ray radiation impinging on the upper atmosphere and track the resulting secondary particles through a model of the Earth’s atmosphere and lithosphere. To address the issue of nuclide-dependent scaling, the neutron flux values determined by the MCNPX simulation are folded in with estimated cross-section values [5,6]. Preliminary calculations indicate that the scaling of nuclide production potential in free air is a function of both altitude and nuclide production pathway. At 0 g/cm2 (sea-level) all neutron spallation pathways have attenuation lengths within 1% of 130 g/cm2. However, the differences in attenuation length grow with increasing altitude. At 530 g/cm2 atmospheric height (~5,500 m), the apparent attenuation lengths for aggregate SiO2(n,x)10Be, aggregate SiO2(n,x)14C and K(n,x)36Cl become 149.5 g/cm2, 151 g/cm2 and 148 g/cm2, respectively.
At 700 g/cm2 atmospheric height (~8,400 m, close to the highest possible sampling altitude), the apparent attenuation lengths become 171 g/cm2, 174 g/cm2 and 165 g/cm2, respectively, a difference of +/-5%. Based on these preliminary data, there may be up to 6% error in production rate scaling. Proton spallation is a small yet important component of spallation events; these data will also be presented along with the neutron results. While the differences between attenuation lengths for individual nuclides are small at sea level, they are systematic and grow with altitude. Until now, there has been no numeric analysis of this phenomenon; the global scaling schemes for CNs have therefore been missing an aspect of the physics critical for achieving close agreement between empirical calibration data and physics-based models. [1] T. J. Dunai, "Cosmogenic Nuclides: Principles, Concepts and Applications in the Earth Surface Sciences", Cambridge University Press, Cambridge, 2010 [2] D. Lal, Annual Rev of Earth Planet Sci, 1988, p355-388 [3] J. Gosse and F. Phillips, Quaternary Science Rev, 2001, p1475-1560 [4] F. Phillips et al., (Proposal to the National Science Foundation), 2003 [5] K. Nishiizumi et al., Geochimica et Cosmochimica Acta, 2009, p2163-2176 [6] R. C. Reedy, personal communication.
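
    The "apparent attenuation length" quoted above can be extracted from production (or flux) values at two atmospheric depths under an exponential model, p ∝ exp(-d/Λ). A minimal sketch with synthetic values, not the study's MCNPX output:

```python
import math

def apparent_attenuation_length(d1, d2, p1, p2):
    """Solve p ∝ exp(-d/Lambda) for Lambda from two (atmospheric depth, production) samples."""
    return (d2 - d1) / math.log(p1 / p2)

# Synthetic check: generate production values from a known Lambda and recover it.
lam_true = 150.0                 # g/cm^2, illustrative
d1, d2 = 1033.0, 700.0           # g/cm^2 (sea level and high altitude, assumed values)
p1, p2 = math.exp(-d1 / lam_true), math.exp(-d2 / lam_true)
lam = apparent_attenuation_length(d1, d2, p1, p2)
print(f"recovered attenuation length: {lam:.1f} g/cm^2")
```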

  11. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  12. Comparison of experimental and Monte-Carlo simulation of MeV particle transport through tapered/straight glass capillaries and circular collimators

    NASA Astrophysics Data System (ADS)

    Hespeels, F.; Tonneau, R.; Ikeda, T.; Lucas, S.

    2015-11-01

    This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV, respectively): classical platinum TEM-like collimators, straight glass capillaries, and tapered glass capillaries. In addition, we developed a Monte-Carlo code, based on Rutherford scattering theory, which simulates particle transport through collimating devices. The simulation results match the experimental observations of beam transport through collimators both in air and in vacuum. This research shows the focusing effect of tapered capillaries, which clearly enables higher transmission flux. Nevertheless, aligning the capillaries with the incident beam is a tedious prerequisite, which makes the TEM collimator the easiest way to produce a 50 μm microbeam.
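
    The core of a Rutherford-scattering Monte Carlo is sampling a polar deflection per step. A hedged sketch of one common choice, the small-angle screened-Rutherford distribution with inverse-CDF sampling; the screening angle is an illustrative value, and the paper's code may use a different form:

```python
import math
import random

def sample_screened_rutherford_angle(theta_screen, rng):
    """Sample a polar scattering angle from the small-angle screened-Rutherford
    distribution dsigma/dOmega ∝ (theta^2 + theta_s^2)^-2, via the inverse CDF
    F(theta) = theta^2 / (theta^2 + theta_s^2)  =>  theta = theta_s * sqrt(u / (1 - u))."""
    u = rng.random()
    return theta_screen * math.sqrt(u / (1.0 - u))

rng = random.Random(0)
theta_s = 1.0e-2   # screening angle in rad (illustrative assumption)
samples = sorted(sample_screened_rutherford_angle(theta_s, rng) for _ in range(20000))
median = samples[len(samples) // 2]
print(f"median scattering angle: {median:.4f} rad")   # F(theta_s) = 0.5, so median ~ theta_s
```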

  13. Utilization of Monte Carlo Calculations in Radiation Transport Analyses to Support the Design of the U.S. Spallation Neutron Source (SNS)

    SciTech Connect

    Johnson, J.O.

    2000-10-23

    The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an approximately 0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.

  14. Monte Carlo tests of small-world architecture for coarse-grained networks of the United States railroad and highway transportation systems

    NASA Astrophysics Data System (ADS)

    Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.

    2015-11-01

    Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
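
    The comparison described above (observed clustering and path length versus Monte Carlo randomizations) can be sketched with the classic small-world coefficient sigma = (C/C_rand)/(L/L_rand). The toy network below is a rewired ring lattice standing in for the county-level graphs, and p = 1.0 rewiring stands in for global randomization; this is not the authors' code or data.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node linked to k/2 neighbors on each side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring of a copy of adj; p=1.0 gives a global randomization."""
    n = len(adj)
    new = {v: set(nbrs) for v, nbrs in adj.items()}
    for u in range(n):
        for v in sorted(new[u]):
            if u < v and rng.random() < p:
                w = rng.randrange(n)
                if w != u and w not in new[u]:
                    new[u].discard(v)
                    new[v].discard(u)
                    new[u].add(w)
                    new[w].add(u)
    return new

def clustering(adj):
    """Mean local clustering coefficient."""
    total = 0.0
    for v, nbrs in adj.items():
        nb = sorted(nbrs)
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k) if nb[j] in adj[nb[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def mean_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from every node)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(7)
g = rewire(ring_lattice(60, 6), 0.1, rng)     # toy "observed" network
rand = [rewire(ring_lattice(60, 6), 1.0, rng) for _ in range(5)]
c_rand = sum(clustering(r) for r in rand) / len(rand)
l_rand = sum(mean_path_length(r) for r in rand) / len(rand)
sigma = (clustering(g) / c_rand) / (mean_path_length(g) / l_rand)
print(f"sigma = {sigma:.2f}")
```

    The abstract's point is that this criterion can mislead for transportation graphs: a planar null model, not a global randomization, is the fairer baseline.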

  15. Calculs Monte Carlo en transport d'energie pour le calcul de la dose en radiotherapie sur plateforme graphique hautement parallele

    NASA Astrophysics Data System (ADS)

    Hissoiny, Sami

    Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use. The compromise between these two opposing factors has given rise to several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware itself existed before 2007, and certain scientific computations were already carried out on GPUs, but 2007 marked the arrival of the CUDA programming language, which makes it possible to program the GPU without dealing with graphics contexts. The GPU is a massively parallel computing platform well adapted to data-parallel algorithms. This thesis investigates how to maximize the use of the GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out completely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a 2%/2 mm gamma criterion while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculation.
Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as to TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a 2%/2 mm gamma criterion of two experiments aiming at highlighting the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculations through Monte Carlo simulations and that the GPUMCD software platform makes it possible to achieve fast and accurate results.

  16. The role of plasma evolution and photon transport in optimizing future advanced lithography sources

    SciTech Connect

    Sizyuk, Tatyana; Hassanein, Ahmed

    2013-08-28

    Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, an approach with many advantages, including the generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20–30 μm in LPP devices with a dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more, which can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail the plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance of accurately modeling these physical processes increases as the target size and its simulation domain decrease.

  17. Effective QCD and transport description of dilepton and photon production in heavy-ion collisions and elementary processes

    NASA Astrophysics Data System (ADS)

    Linnyk, O.; Bratkovskaya, E. L.; Cassing, W.

    2016-03-01

    In this review we address the dynamics of relativistic heavy-ion reactions and in particular the information obtained from electromagnetic probes that stem from the partonic and hadronic phases. The out-of-equilibrium description of strongly interacting relativistic fields is based on the theory of Kadanoff and Baym. For the modeling of the partonic phase we introduce an effective dynamical quasiparticle model (DQPM) for QCD in equilibrium. In the DQPM, the widths and masses of the dynamical quasiparticles are controlled by transport coefficients that can be compared to the corresponding quantities from lattice QCD. The resulting off-shell transport approach is denoted by Parton-Hadron-String Dynamics (PHSD) and includes covariant dynamical transition rates for hadronization and keeps track of the hadronic interactions in the final phase. It is shown that the PHSD captures the bulk dynamics of heavy-ion collisions from lower SPS to LHC energies and thus provides a solid basis for the evaluation of the electromagnetic emissivity, which is calculated on the basis of the same dynamical parton propagators that are employed for the dynamical evolution of the partonic system. The production of direct photons in elementary processes and heavy-ion reactions is discussed, and the present status of the direct-photon v2 "puzzle" (the large elliptic flow v2 of direct photons experimentally observed in heavy-ion collisions) is addressed for nucleus-nucleus reactions at RHIC and LHC energies. The role of hadronic and partonic sources for the photon spectra and the flow coefficients v2 and v3 is considered as well as the possibility to subtract the QGP signal from the experimental observables. Furthermore, the production of e+e- or μ+μ- pairs in elementary processes and A + A reactions is addressed.
The calculations within the PHSD from SIS to LHC energies show an increase of the low mass dilepton yield essentially due to the in-medium modification of the ρ-meson and at the lowest energy also due to a multiple regeneration of Δ-resonances. Furthermore, pronounced traces of the partonic degrees-of-freedom are found in the intermediate dilepton mass regime (1.2 GeV < M < 3 GeV) at relativistic energies, which will also shed light on the nature of the very early degrees-of-freedom in nucleus-nucleus collisions.

  18. Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se

    SciTech Connect

    Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo

    2012-01-15

    Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
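
    A common way to compute the Swank factor from a pulse-height spectrum is through its moments, A_S = M1²/(M0·M2). A small sketch with an illustrative spectrum; the exact estimator and binning used in the paper may differ:

```python
def swank_factor(pulse_heights, counts):
    """Swank factor A_S = M1^2 / (M0 * M2) from pulse-height spectrum moments,
    where Mn = sum_k counts[k] * height[k]^n."""
    m0 = float(sum(counts))
    m1 = sum(h * c for h, c in zip(pulse_heights, counts))
    m2 = sum(h * h * c for h, c in zip(pulse_heights, counts))
    return m1 * m1 / (m0 * m2)

heights = [10.0, 11.0, 12.0]                  # illustrative pulse-height bins
a_delta = swank_factor(heights, [0, 100, 0])  # ideal single-peak response
a_broad = swank_factor(heights, [30, 40, 30]) # spectral broadening lowers A_S
print(a_delta, a_broad)
```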

  19. Effect of burst and recombination models for Monte Carlo transport of interacting carriers in a-Se x-ray detectors on Swank noise

    SciTech Connect

    Fang, Yuan; Karim, Karim S.; Badano, Aldo

    2014-01-15

    Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) algorithm and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for the generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature.
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
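
    The zero-field limit of the Onsager model used for validation can be sketched directly: the geminate escape probability for a pair with initial separation r0 is exp(-r_c/r0), where r_c is the Onsager radius at which Coulomb attraction equals kB·T. The relative permittivity of a-Se (~6.3) is an assumed example value here:

```python
import math

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
KB = 1.380649e-23            # J/K

def onsager_radius(eps_r, temperature):
    """Separation r_c at which the Coulomb attraction equals thermal energy kB*T."""
    return E_CHARGE**2 / (4.0 * math.pi * EPS0 * eps_r * KB * temperature)

def escape_probability(r0, eps_r, temperature):
    """Zero-field Onsager escape probability for initial pair separation r0 (m)."""
    return math.exp(-onsager_radius(eps_r, temperature) / r0)

rc = onsager_radius(6.3, 293.0)   # eps_r ~ 6.3 for a-Se is an assumed value
print(f"Onsager radius: {rc * 1e9:.1f} nm")
```

    The recombination fraction follows as 1 minus the escape probability; the field-dependent case validated in the paper requires the full Onsager solution.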

  1. High-speed DC transport of emergent monopoles in spinor photonic fluids.

    PubMed

    Terças, H; Solnyshkov, D D; Malpuech, G

    2014-07-18

    We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles. PMID:25083658

  2. Anisotropy of electrical transport in pnictide superconductors studied using Monte Carlo simulations of the spin-fermion model.

    PubMed

    Liang, Shuhua; Alvarez, Gonzalo; Şen, Cengiz; Moreo, Adriana; Dagotto, Elbio

    2012-07-27

    An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe(2)As(2) detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order. PMID:23006104

  3. Thermal photon, dilepton production, and electric charge transport in a baryon rich strongly coupled QGP from holography

    NASA Astrophysics Data System (ADS)

    Finazzo, Stefano Ivo; Rougemont, Romulo

    2016-02-01

    We obtain the thermal photon and dilepton production rates in a strongly coupled quark-gluon plasma (QGP) at both zero and nonzero baryon chemical potentials using a bottom-up Einstein-Maxwell-dilaton holographic model that is in good quantitative agreement with the thermodynamics of (2+1)-flavor lattice QCD around the crossover transition for baryon chemical potentials up to 400 MeV, which may be reached in the beam energy scan at RHIC. We find that increasing the temperature T and the baryon chemical potential μB enhances the peak present in both spectra. We also obtain the electric charge susceptibility, the dc and ac electric conductivities, and the electric charge diffusion as functions of T and μB. We find that electric diffusive transport is suppressed as one increases μB. At zero baryon density, we compare our results for the dc electric conductivity and the electric charge diffusion with the latest lattice data available for these observables and find reasonable agreement around the crossover transition. Therefore, our holographic results may be used to constrain the magnitude of the thermal photon and dilepton production rates in a strongly coupled QGP, which we found to be at least 1 order of magnitude below perturbative estimates.

  4. Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Popov, Pavel P.; Pope, Stephen B.

    2010-03-01

    We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
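
    As a toy illustration of such splitting schemes in a Lagrangian particle method (not the paper's composition-PDF schemes), Strang splitting of an Ornstein-Uhlenbeck SDE into drift and diffusion sub-steps can be checked against the analytic mean E[X_T] = x0·e^(-θT):

```python
import math
import random

def strang_ou(x0, theta, sigma, T, nsteps, npart, rng):
    """Evolve npart particles under dX = -theta*X dt + sigma dW using Strang
    splitting: half-step drift, full-step diffusion, half-step drift."""
    h = T / nsteps
    xs = [x0] * npart
    for _ in range(nsteps):
        xs = [x * (1.0 - theta * h / 2.0) for x in xs]                      # half drift
        xs = [x + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0) for x in xs]   # diffusion
        xs = [x * (1.0 - theta * h / 2.0) for x in xs]                      # half drift
    return xs

rng = random.Random(1)
xs = strang_ou(x0=1.0, theta=1.0, sigma=0.5, T=1.0, nsteps=20, npart=20000, rng=rng)
mean = sum(xs) / len(xs)
print(f"simulated mean {mean:.4f} vs analytic {math.exp(-1.0):.4f}")
```

    Verifying convergence against a known solution in this way is the simplest relative of the method of manufactured solutions used in the paper.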

  5. Quantum Dot Optical Frequency Comb Laser with Mode-Selection Technique for 1-μm Waveband Photonic Transport System

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naokatsu; Akahane, Kouichi; Kawanishi, Tetsuya; Katouf, Redouane; Sotobayashi, Hideyuki

    2010-04-01

    An optical frequency comb was generated from a single quantum dot laser diode (QD-LD) in the 1-μm waveband using an Sb-irradiated InGaAs/GaAs QD active medium. A single-mode-selection technique and interference injection-seeding technique are proposed for selecting the optical mode of a QD optical frequency comb laser (QD-CML). In the 1-μm waveband, a wavelength-tunable single-mode light source and a multiple-wavelength generator of a comb with 100-GHz spacing and ultrafine teeth are successfully demonstrated by applying the optical-mode-selection techniques to the QD-CML. Additionally, by applying the single-mode-selection technique to the QD-CML, a 10-Gbps clear eye opening for multiple wavelengths in 1-μm waveband photonic transport over a 1.5-km-long holey fiber is obtained.

  7. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
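
    The quoted voxel standard deviations reflect the generic 1/√N convergence of Monte Carlo tallies with the number of histories N. A toy sketch with an illustrative per-history score, not PEREGRINE's tally:

```python
import math
import random

def tally_relative_sigma(nhist, rng):
    """Relative standard error of the mean of a toy per-history score."""
    scores = [rng.expovariate(1.0) for _ in range(nhist)]
    mean = sum(scores) / nhist
    var = sum((s - mean) ** 2 for s in scores) / (nhist - 1)
    return math.sqrt(var / nhist) / mean

rng = random.Random(0)
rs = [tally_relative_sigma(n, rng) for n in (10_000, 40_000, 160_000)]
for n, r in zip((10_000, 40_000, 160_000), rs):
    print(f"{n:7d} histories: relative sigma {r:.4f}")
```

    Quadrupling the histories roughly halves the statistical uncertainty, which is why raw throughput (histories per hour) matters.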

  8. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    SciTech Connect

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for complex neutron/photon/electron transport calculations in the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant enhanced the capabilities of the MCNP Visual Editor, allowing it to read in both 2D and 3D Computer Aided Design (CAD) files so that the user can electronically generate a valid MCNP input geometry.

  9. Conductivity and ordering on the planar honeycomb lattice: A Monte Carlo study of transport and order parameters in beta script aluminas

    NASA Astrophysics Data System (ADS)

    Pechenik, Alexander; Whitmore, D. H.; Ratner, Mark A.

    1986-03-01

    The effects of ordering on the conductivity of carriers on a planar honeycomb lattice are studied using a modified Monte Carlo technique. The hopping model includes repulsion between carriers on nearest-neighbor sites. In agreement with previous work, we find an ordered region on the phase diagram for ion density ρ in the range 0.41≤ρ≤0.59 for sufficiently low temperatures. Within this composition range of the phase diagram, ordering effects on the conductivity and on its temperature dependence are very substantial; previous results which did not correctly include the ordering effects yield quite different conductivities. We also calculate correlation factors, diffusion coefficients, and several different kinds of order parameters. Experiments on systems such as Ba²⁺ β″-alumina and AgCrSe2 should be able to probe these ordered regions, and our results predict the influence of structure on the transport of the mobile ionic species in such compounds.
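
    The hopping model summarized above lends itself to a compact Metropolis sketch. The fragment below is a minimal illustration (not the authors' modified Monte Carlo technique): carriers on a periodic honeycomb lattice attempt hops to empty nearest-neighbor sites, with acceptance governed by the nearest-neighbor repulsion V, in units where k_B = 1.

```python
import math
import random

def honeycomb_neighbors(L):
    """Adjacency list for an L x L periodic honeycomb lattice
    (two sites per unit cell, three nearest neighbors per site)."""
    idx = lambda i, j, s: (i % L) * 2 * L + (j % L) * 2 + s
    nbrs = [[] for _ in range(2 * L * L)]
    for i in range(L):
        for j in range(L):
            a = idx(i, j, 0)
            for b in (idx(i, j, 1), idx(i - 1, j, 1), idx(i, j - 1, 1)):
                nbrs[a].append(b)
                nbrs[b].append(a)
    return nbrs

def mc_sweep(occ, nbrs, V, T, rng=random):
    """One Metropolis sweep of the hopping model: pick a site; if it holds a
    carrier, attempt a hop to a random nearest-neighbor site (only if empty),
    accepting with probability min(1, exp(-dE/T)), where dE comes from the
    nearest-neighbor repulsion V."""
    n = len(occ)
    for _ in range(n):
        a = rng.randrange(n)
        if not occ[a]:
            continue
        b = rng.choice(nbrs[a])
        if occ[b]:
            continue                     # hard-core constraint: site occupied
        # repulsion energy before and after the hop, excluding the pair itself
        e_old = V * sum(occ[c] for c in nbrs[a] if c != b)
        e_new = V * sum(occ[c] for c in nbrs[b] if c != a)
        dE = e_new - e_old
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            occ[a], occ[b] = 0, 1
```

    Tracking carrier displacements across many such sweeps yields the diffusion coefficients and correlation factors the study reports.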

  10. Retinoblastoma external beam photon irradiation with a special ‘D’-shaped collimator: a comparison between measurements, Monte Carlo simulation and a treatment planning system calculation

    NASA Astrophysics Data System (ADS)

    Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.

    2012-11-01

    Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm². Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared, using the gamma test, with Monte Carlo simulations run with the PENELOPE code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system. PENELOPE simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles of less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although these discrepancies are notable, it is possible to consider the analytical anisotropic algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for establishing these limitations and is required for optimizing the treatment technique and the dedicated collimator.

  11. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  12. PENEPMA: a Monte Carlo programme for the simulation of X-ray emission in EPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2016-02-01

    The Monte Carlo programme PENEPMA performs simulations of X-ray emission from samples bombarded with electron beams. It is based both on the general-purpose Monte Carlo simulation package PENELOPE, an elaborate system for the simulation of coupled electron-photon transport in arbitrary materials, and on the geometry subroutine package PENGEOM, which tracks particles through complex material structures defined by quadric surfaces. In this work, we give a brief overview of the capabilities of the latest version of PENEPMA along with several examples of its application to the modelling of electron probe microanalysis measurements.

  13. Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code

    SciTech Connect

    Rhoades, W.A.; Childs, R.L.

    1982-07-01

    DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.

  14. Design and fabrication of hollow-core photonic crystal fibers for high-power ultrashort pulse transportation and pulse compression.

    PubMed

    Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F

    2012-08-01

    We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large core fiber exhibits the lowest attenuation reported to date among all kagome fibers, 40 dB/km, over a broadband transmission window centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little pulse distortion during propagation. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air. PMID:22859102

  15. Dopamine Transporter Single-Photon Emission Computerized Tomography Supports Diagnosis of Akinetic Crisis of Parkinsonism and of Neuroleptic Malignant Syndrome

    PubMed Central

    Martino, G.; Capasso, M.; Nasuti, M.; Bonanni, L.; Onofrj, M.; Thomas, A.

    2015-01-01

    Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments, yet it is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably identified by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatal dopamine transporter binding corresponding to the “burst striatum” pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing this outstanding binding reduction, presynaptic dopamine transporter ligands can provide instrumental evidence of AC in parkinsonism and NMS. PMID:25837755
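
    Semiquantitative evaluation of binding potentials, as mentioned above, is commonly computed as a specific-binding ratio of striatal counts against a nondisplaceable reference region. The sketch below shows that generic ratio; the study's exact protocol may differ, and the numbers are purely illustrative.

```python
def binding_potential(striatal_mean, reference_mean):
    """Specific-binding ratio commonly used for FP-CIT SPECT:
    BP = (mean striatal counts - mean reference counts) / mean reference counts."""
    return (striatal_mean - reference_mean) / reference_mean

# Hypothetical count levels: a healthy baseline versus a crisis scan in which
# striatal uptake has collapsed toward the reference (background) level.
baseline = binding_potential(3.0, 1.0)   # -> 2.0
crisis = binding_potential(1.2, 1.0)     # -> ~0.2
reduction = 1 - crisis / baseline        # -> ~0.9, i.e. a 90% loss
```

    A reduction of this magnitude, uniform across caudate and putamen, is the "burst striatum" pattern the abstract describes.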

  16. Vesicle Photonics

    SciTech Connect

    Vasdekis, Andreas E.; Scott, E. A.; Roke, Sylvie; Hubbell, J. A.; Psaltis, D.

    2013-04-03

    Thin membranes, under appropriate boundary conditions, can self-assemble into vesicles, nanoscale bubbles that encapsulate and hence protect or transport molecular payloads. In this paper, we review the types and applications of light fields interacting with vesicles. By encapsulating light-emitting molecules (e.g. dyes, fluorescent proteins, or quantum dots), vesicles can act as imaging agents. Vesicle imaging can also take place under second harmonic generation from the vesicle membrane, as well as by employing mass spectrometry. Light fields can also be employed to transport vesicles using optical tweezers (photon momentum) or to directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy).

  17. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a developing modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produce MERT treatment plans, based on dose-volume constraints, that employ pre-generated patient-specific Monte Carlo kernels; these kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and the optimization algorithms is demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma site, using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed-beam photon and electron treatment planning; photon optimization tools were included within the MERT planning toolkit for the purpose of mixed-beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and of an inverse planning system for photon, electron, or mixed-beam radiotherapy (MBRT). The justification and validation of this work is found in the results of the planning studies, which demonstrate dosimetric advantages of MERT and MBRT in comparison to clinical treatment alternatives.
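
    Kernel-based inverse planning of the kind described above can be illustrated schematically: once per-beamlet Monte Carlo dose kernels are pre-generated, planning reduces to choosing nonnegative beamlet weights whose weighted kernel sum approximates the prescribed dose. The toy projected-gradient sketch below uses a plain least-squares objective, not MMCTP's dose-volume-constraint algorithm; all names and the step size are illustrative.

```python
import numpy as np

def optimize_beam_weights(kernels, prescription, iters=2000, lr=0.1):
    """Find nonnegative beamlet weights w minimizing ||K^T w - d||^2, where
    row b of K is the flattened pre-computed dose kernel of beamlet b and d
    is the flattened prescription. Projected gradient descent keeps w >= 0
    (only deliverable, nonnegative weights are physical). The fixed step
    size lr is chosen for small examples, not tuned for real kernels."""
    K = kernels.reshape(kernels.shape[0], -1)   # (beamlets, voxels)
    d = prescription.ravel()
    w = np.zeros(K.shape[0])
    for _ in range(iters):
        grad = K @ (K.T @ w - d)                # gradient of the residual norm
        w = np.maximum(w - lr * grad, 0.0)      # project onto w >= 0
    return w
```

    Replacing the quadratic objective with penalty terms per dose-volume constraint gives the flavor of optimization the thesis describes.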

  18. Theoretical and experimental investigations of asymmetric light transport in graded index photonic crystal waveguides

    SciTech Connect

    Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.

    2014-01-20

    To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on the spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.

  19. Theoretical and experimental investigations of asymmetric light transport in graded index photonic crystal waveguides

    NASA Astrophysics Data System (ADS)

    Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.

    2014-01-01

    To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on the spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5-1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz-13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.

  20. PEREGRINE: An all-particle Monte Carlo code for radiation therapy

    SciTech Connect

    Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.

    1994-09-01

    The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently-used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources.

  1. Monte Carlo analysis of transient electron transport in wurtzite Zn1-xMgxO combined with first principles calculations

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Hu, Linlin; Yang, Yintang; Shan, Xuefei; Song, Jiuxu; Guo, Lixin; Zhang, Zhiyong

    2015-01-01

    Transient characteristics of wurtzite Zn1-xMgxO are investigated using a three-valley ensemble Monte Carlo model, verified by the agreement between the simulated low-field mobility and previously reported experimental results. The electronic structures are obtained by first-principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn1-xMgxO (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10⁷, 2.133 × 10⁷, 1.889 × 10⁷, and 1.295 × 10⁷ cm/s, respectively. With increasing Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, a phenomenon of velocity undershoot is observed in Zn0.889Mg0.111O and Zn0.833Mg0.167O, respectively, while it is not observed for Zn0.806Mg0.194O and Zn0.75Mg0.25O even at 3000 kV/cm, which is especially important for high-frequency devices.

  2. Photon-assisted electronic and spin transport in a junction containing precessing molecular spin

    NASA Astrophysics Data System (ADS)

    Filipović, Milena; Belzig, Wolfgang

    2016-02-01

    We study the ac charge and spin transport through an orbital of a magnetic molecule with spin precessing in a constant magnetic field. We assume that the source and drain contacts have time-dependent chemical potentials. We employ the Keldysh nonequilibrium Green's functions method to calculate the spin and charge currents to linear order in the time-dependent potentials. The molecular and electronic spins are coupled via exchange interaction. The time-dependent molecular spin drives inelastic transitions between the molecular quasienergy levels, resulting in a rich structure in the transport characteristics. The time-dependent voltages allow us to reveal the internal precession time scale (the Larmor frequency) by a dc conductance measurement if the ac frequency matches the Larmor frequency. In the low-ac-frequency limit the junction resembles a classical electric circuit. Furthermore, we show that the setup can be used to generate dc spin currents, which are controlled by the molecular magnetization direction and the relative phases between the Larmor precession and the ac voltage.

  3. Two-photon transport in a waveguide coupled to a cavity in a two-level system

    SciTech Connect

    Shi, T.; Sun, C. P.; Fan, Shanhui

    2011-12-15

    We study two-photon effects for a cavity quantum electrodynamics system where a waveguide is coupled to a cavity embedded in a two-level system. The wave function of two-photon scattering is exactly solved by using the Lehmann-Symanzik-Zimmermann reduction. Our results about quantum statistical properties of the outgoing photons explicitly exhibit the photon blockade effects in the strong-coupling regime. These results agree with the observations of recent experiments.

  4. First-principle-based full-dispersion Monte Carlo simulation of the anisotropic phonon transport in the wurtzite GaN thin film

    NASA Astrophysics Data System (ADS)

    Wu, Ruikang; Hu, Run; Luo, Xiaobing

    2016-04-01

    In this study, we developed a first-principle-based full-dispersion Monte Carlo simulation method to study the anisotropic phonon transport in wurtzite GaN thin films. The input data of thermal properties in the MC simulations were calculated with the first-principle method. The anisotropy of thermal conductivity in bulk wurtzite GaN is found to be strengthened by isotopic scattering and reduced temperature, reaching 40.08% for natural bulk GaN at 100 K. As the GaN thin-film thickness decreases, the anisotropy of the out-of-plane thermal conductivity is heavily reduced, due both to ballistic transport and to the diminished importance of the low-frequency phonons with anisotropic group velocities. On the contrary, the in-plane thermal conductivity anisotropy of the GaN thin film is strengthened by reducing the film thickness, reaching 35.63% when the natural GaN thin-film thickness is reduced to 50 nm at 300 K with the degree of specularity set to zero. The anisotropy is also improved by increasing the surface roughness of the GaN thin film.

  5. A multi-subband Monte Carlo study on dominance of scattering mechanisms over carrier transport in sub-10-nm Si nanowire FETs

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon

    2016-01-01

    Dominance of various scattering mechanisms in determination of the carrier mobility is examined for silicon (Si) nanowires of sub-10-nm cross-sections. With a focus on p-type channels, the steady-state hole mobility is studied with multi-subband Monte Carlo simulations to consider quantum effects in nanoscale channels. Electronic structures of gate-all-around nanowires are described with a 6-band k · p model. Channel bandstructures and electrostatics under gate biases are determined self-consistently with Schrödinger-Poisson simulations. Modeling results not only indicate that the hole mobility is severely degraded as channels have smaller cross-sections and are inverted more strongly but also confirm that the surface roughness scattering degrades the mobility more severely than the phonon scattering does. The surface roughness scattering affects carrier transport more strongly in narrower channels, showing ~90% dominance in determination of the mobility. At the same channel population, [110] channels suffer from the surface roughness scattering more severely than [100] channels do, due to the stronger corner effect and larger population of carriers residing near channel surfaces. With a sound theoretical framework coupled to the spatial distribution of channel carriers, this work may present a useful guideline for understanding hole transport in ultra-narrow Si nanowires.

  6. Transport through an Anderson impurity: Current ringing, nonlinear magnetization, and a direct comparison of continuous-time quantum Monte Carlo and hierarchical quantum master equations

    NASA Astrophysics Data System (ADS)

    Härtle, R.; Cohen, G.; Reichman, D. R.; Millis, A. J.

    2015-08-01

    We give a detailed comparison of the hierarchical quantum master equation (HQME) method to a continuous-time quantum Monte Carlo (CT-QMC) approach, assessing the usability of these numerically exact schemes as impurity solvers in practical nonequilibrium calculations. We review the main characteristics of the methods and discuss the scaling of the associated numerical effort. We substantiate our discussion with explicit numerical results for the nonequilibrium transport properties of a single-site Anderson impurity. The numerical effort of the HQME scheme scales linearly with the simulation time but increases (at worst exponentially) with decreasing temperature. In contrast, CT-QMC is less restricted by temperature at short times, but in general the cost of going to longer times is also exponential. After establishing the numerical exactness of the HQME scheme, we use it to elucidate the influence of different ways to induce transport through the impurity on the initial dynamics, discuss the phenomenon of coherent current oscillations, known as current ringing, and explain the nonmonotonic temperature dependence of the steady-state magnetization as a result of competing broadening effects. We also elucidate the pronounced nonlinear magnetization dynamics, which appears on intermediate time scales in the presence of an asymmetric coupling to the electrodes.

  7. A multi-subband Monte Carlo study on dominance of scattering mechanisms over carrier transport in sub-10-nm Si nanowire FETs.

    PubMed

    Ryu, Hoon

    2016-12-01

    Dominance of various scattering mechanisms in determination of the carrier mobility is examined for silicon (Si) nanowires of sub-10-nm cross-sections. With a focus on p-type channels, the steady-state hole mobility is studied with multi-subband Monte Carlo simulations to consider quantum effects in nanoscale channels. Electronic structures of gate-all-around nanowires are described with a 6-band k · p model. Channel bandstructures and electrostatics under gate biases are determined self-consistently with Schrödinger-Poisson simulations. Modeling results not only indicate that the hole mobility is severely degraded as channels have smaller cross-sections and are inverted more strongly but also confirm that the surface roughness scattering degrades the mobility more severely than the phonon scattering does. The surface roughness scattering affects carrier transport more strongly in narrower channels, showing ∼90 % dominance in determination of the mobility. At the same channel population, [110] channels suffer from the surface roughness scattering more severely than [100] channels do, due to the stronger corner effect and larger population of carriers residing near channel surfaces. With a sound theoretical framework coupled to the spatial distribution of channel carriers, this work may present a useful guideline for understanding hole transport in ultra-narrow Si nanowires. PMID:26815605

  8. Modeling the uniform transport in thin film SOI MOSFETs with a Monte-Carlo simulator for the 2D electron gas

    NASA Astrophysics Data System (ADS)

    Lucci, Luca; Palestri, Pierpaolo; Esseni, David; Selmi, Luca

    2005-09-01

    In this paper, we present simulations of some of the most relevant transport properties of the inversion layer of ultra-thin film SOI devices with a self-consistent Monte-Carlo transport code for a confined electron gas. We show that size-induced quantization not only decreases the low-field mobility (as experimentally found in [Uchida K, Koga J, Ohba R, Numata T, Takagi S. Experimental evidences of quantum-mechanical effects on low-field mobility, gate-channel capacitance and threshold voltage of ultrathin body SOI MOSFETs, IEEE IEDM Tech Dig 2001;633-6; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. Low field electron and hole mobility of SOI transistors fabricated on ultra-thin silicon films for deep sub-micron technology application. IEEE Trans Electron Dev 2001;48(12):2842-50; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. An experimental study of mobility enhancement in ultra-thin SOI transistors operated in double-gate mode, IEEE Trans Electron Dev 2003;50(3):802-8])

  9. Parallel Finite Element Electron-Photon Transport Analysis on 2-D Unstructured Mesh

    SciTech Connect

    Drumm, C.R.

    1999-01-01

    A computer code has been developed to solve the linear Boltzmann transport equation on an unstructured mesh of triangles, generated from a Pro/E model. An arbitrary arrangement of distinct material regions is allowed. Energy dependence is handled by solving over an arbitrary number of discrete energy groups. Angular dependence is treated by a Legendre-polynomial expansion of the particle cross sections and a discrete-ordinates treatment of the particle fluence. The resulting linear system is solved in parallel with a preconditioned conjugate-gradients method. The solution method is unique in that the space-angle dependence is solved simultaneously, eliminating the need for the usual inner iterations. Electron cross sections are obtained from a Goudsmit-Saunderson modified version of the CEPXS code. A one-dimensional version of the code has also been developed for testing and development purposes.
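
    The preconditioned conjugate-gradient solve named above can be illustrated generically. Below is a minimal Jacobi-preconditioned CG for a symmetric positive-definite system: a sketch of the solver family, not the parallel space-angle implementation the abstract describes.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for A x = b, with A
    symmetric positive definite and M_inv_diag the reciprocal of diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    In the paper's setting, A is the discretized space-angle operator and the matrix-vector products are what is distributed across processors.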

  10. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M.; Daly, Thomas P.; Moses, Edward I.; Patterson, Jr., Ralph W.; Schach von Wittenau, Alexis E.; Garrett, Dewey N.; House, Ronald K.; Hartmann-Siantar, Christine L.; Cox, Lawrence J.; Fujino, Donald H.

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
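
    The common-volume and dosel-mass steps of the method above reduce to axis-aligned box intersection. A minimal sketch with hypothetical inputs (box corners and densities are illustrative, not from the patent):

```python
def common_volume(box_a, box_b):
    """Overlap volume of two axis-aligned boxes, each given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    vol = 1.0
    for lo_a, lo_b, hi_a, hi_b in zip(box_a[0], box_b[0], box_a[1], box_b[1]):
        overlap = min(hi_a, hi_b) - max(lo_a, lo_b)
        if overlap <= 0:
            return 0.0            # boxes are disjoint along this axis
        vol *= overlap
    return vol

def dosel_mass(dosel, voxels, densities):
    """Steps three and four sketched: sum (common volume x voxel density)
    over every voxel overlapping the dosel volume."""
    return sum(common_volume(dosel, v) * rho
               for v, rho in zip(voxels, densities))
```

    Dividing the energy deposited inside a dosel by its mass computed this way yields the dose of the final step.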

  11. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  12. Computing Numerically-Optimal Bounding Boxes for Constructive Solid Geometry (CSG) Components in Monte Carlo Particle Transport Calculations

    NASA Astrophysics Data System (ADS)

    Millman, David L.; Griesheimer, David P.; Nease, Brian R.; Snoeyink, Jack

    2014-06-01

    For large, highly detailed models, Monte Carlo simulations may spend a large fraction of their run-time performing simple point location and distance to surface calculations for every geometric component in a model. In such cases, the use of bounding boxes (axis-aligned boxes that bound each geometric component) can improve particle tracking efficiency and decrease overall simulation run time significantly. In this paper we present a robust and efficient algorithm for generating the numerically-optimal bounding box (optimal to within a user-specified tolerance) for an arbitrary Constructive Solid Geometry (CSG) object defined by quadratic surfaces. The new algorithm uses iterative refinement to tighten an initial, conservatively large, bounding box into the numerically-optimal bounding box. At each stage of refinement, the algorithm subdivides the candidate bounding box into smaller boxes, which are classified as inside, outside, or intersecting the boundary of the component. In cases where the algorithm cannot unambiguously classify a box, the box is refined further. This process continues until the refinement near the component's extremal points reaches the user-selected tolerance level. This refinement/classification approach is more efficient and practical than methods that rely on computing actual boundary representations or sampling to determine the extent of an arbitrary CSG component. A complete description of the bounding box algorithm is presented, along with a proof that the algorithm is guaranteed to converge to within the specified tolerance of the true optimal bounding box. The paper also provides a discussion of practical implementation details for the algorithm as well as numerical results highlighting performance and accuracy for several representative CSG components.
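
    The refine-and-classify idea can be sketched for the simplest quadratic component, a sphere, where exact bounds of f(p) = |p−c|² − r² over a box are easy to compute. This is only a toy version under those assumptions; the paper's algorithm handles general quadratic CSG objects and proves convergence:

    ```python
    from itertools import product

    def sphere_bounds(center, r):
        """Exact min/max of f(p) = |p-c|^2 - r^2 over an axis-aligned box,
        which lets a box be classified as inside / outside / boundary."""
        def f_min(box):
            lo, hi = box
            s = 0.0
            for l, h, c in zip(lo, hi, center):
                d = 0.0 if l <= c <= h else min(abs(l - c), abs(h - c))
                s += d * d
            return s - r * r
        def f_max(box):
            lo, hi = box
            return sum(max((l - c) ** 2, (h - c) ** 2)
                       for l, h, c in zip(lo, hi, center)) - r * r
        return f_min, f_max

    def tight_bbox(box, f_min, f_max, tol):
        """Shrink a conservative box around {f <= 0}: discard boxes wholly
        outside, keep boxes wholly inside or smaller than tol, and
        subdivide the ambiguous (boundary) ones into octants."""
        stack, keep = [box], []
        while stack:
            lo, hi = stack.pop()
            if f_min((lo, hi)) > 0:                      # outside: discard
                continue
            if f_max((lo, hi)) <= 0 or max(h - l for l, h in zip(lo, hi)) <= tol:
                keep.append((lo, hi))                    # inside, or fine enough
                continue
            mid = [(l + h) / 2 for l, h in zip(lo, hi)]
            for c in product((0, 1), repeat=3):          # octree subdivision
                stack.append(([(l, m)[i] for i, l, m in zip(c, lo, mid)],
                              [(m, h)[i] for i, m, h in zip(c, mid, hi)]))
        return ([min(b[0][k] for b in keep) for k in range(3)],
                [max(b[1][k] for b in keep) for k in range(3)])
    ```

    The returned box over-covers the true extremes by at most the tolerance, matching the abstract's notion of a numerically-optimal bounding box.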

  13. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. 
This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller-scale variability, where the radiative transfer is more three-dimensional, contributes less to the plane-parallel albedo bias than the larger scales, which are more variable. The lack of significant three-dimensional effects also relies on the assumption of a relatively simple geometry. Even with these assumptions, the independent pixel approximation is accurate only for fluxes averaged over large horizontal areas, many photon mean free paths in diameter, and not for local radiance values, which depend strongly on the interaction between neighboring cloud elements.
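
    The comparison above rests on two ingredients that are simple to sketch: a cascade that redistributes liquid water while conserving the total, and an independent-pixel average of per-column albedos. The version below is a toy with a fixed cascade fraction and a generic two-stream-style albedo, both simplifications of the models actually used; the proportionality of optical depth to liquid water path is also an assumption here:

    ```python
    import random

    def bounded_cascade(n_levels, f, mean_lwp, seed=1):
        """1-D bounded-cascade field: at each level every cell splits in
        two, moving a fraction f of its liquid water into a random half,
        so the total (and mean) liquid water is conserved exactly."""
        rng = random.Random(seed)
        field = [mean_lwp]
        for _ in range(n_levels):
            nxt = []
            for w in field:
                s = rng.choice((-1, 1))
                nxt += [w * (1 + s * f), w * (1 - s * f)]
            field = nxt
        return field

    def albedo(tau, g=0.85):
        """Two-stream-style plane-parallel albedo, conservative scattering."""
        t = (1 - g) * tau / 2
        return t / (1 + t)

    # Assume optical depth proportional to liquid water path (factor 1 here).
    field = bounded_cascade(10, f=0.5, mean_lwp=10.0)
    ipa = sum(albedo(t) for t in field) / len(field)  # independent-pixel average
    pp = albedo(sum(field) / len(field))              # plane-parallel (uniform cloud)
    bias = pp - ipa                                   # plane-parallel albedo bias
    ```

    Because albedo is concave in optical depth, the uniform (plane-parallel) cloud is always brighter than the independent-pixel average of the variable cloud, which is exactly the sign of the bias the paper quantifies.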

  14. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show that this method yields better solutions (in terms of resolution) to the particular problem than a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted and actual results obtained with a 600-MeV cyclotron are given.
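
    The core of a creeping random search is short: perturb the current parameter vector, keep the move only if it improves the objective, and reject infeasible trials outright. A minimal sketch under those assumptions (the cost function, step size, and bounds here are hypothetical, not the beam-transport model of the paper):

    ```python
    import random

    def creeping_random_search(cost, x0, step, n_iter=2000,
                               bounds=None, seed=0):
        """Sequential random perturbation ('creeping') search: perturb the
        current point uniformly, accept only improvements, and enforce
        parameter constraints by rejecting out-of-bounds trials."""
        rng = random.Random(seed)
        x, best = list(x0), cost(x0)
        for _ in range(n_iter):
            trial = [xi + rng.uniform(-step, step) for xi in x]
            if bounds and any(not (lo <= t <= hi)
                              for t, (lo, hi) in zip(trial, bounds)):
                continue                      # infeasible: reject the trial
            c = cost(trial)
            if c < best:
                x, best = trial, c            # creep toward better solutions
        return x, best
    ```

    Greedy acceptance keeps every intermediate solution feasible, which is why the method "eliminates the possibility of infeasible solutions" noted in the abstract.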

  15. The association between heroin expenditure and dopamine transporter availability--a single-photon emission computed tomography study.

    PubMed

    Lin, Shih-Hsien; Chen, Kao Chin; Lee, Sheng-Yu; Chiu, Nan Tsing; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Chen, Chia-Chieh; Liao, Mei-Hsiu; Yang, Yen Kuang

    2015-03-30

    One of the consequences of heroin dependency is a huge expenditure on drugs. This underlying economic expense may be a grave burden for heroin users and may lead to criminal behavior, which is a huge cost to society. The neuropsychological mechanism related to heroin purchase remains unclear. Based on recent findings and the established dopamine hypothesis of addiction, we speculated that expenditure on heroin and central dopamine activity may be associated. A total of 21 heroin users were enrolled in this study. The annual expenditure on heroin was assessed, and the availability of the dopamine transporter (DAT) was assessed by single-photon emission computed tomography (SPECT) using [(99m)TC]TRODAT-1. Parametric and nonparametric correlation analyses indicated that annual expenditure on heroin was significantly and negatively correlated with the availability of striatal DAT. After adjustment for potential confounders, the predictive power of DAT availability was significant. Striatal dopamine function may be associated with opioid purchasing behavior among heroin users, and the cycle of spiraling dysfunction in the dopamine reward system could play a role in this association. PMID:25659472

  16. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    PubMed Central

    Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
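
    The first HVL quoted above is conventionally extracted from a measured transmission curve: the attenuator thickness at which the air-kerma reading falls to half its open-beam value. A common way to read it off discrete measurements is log-linear interpolation between the bracketing points, sketched below (this is a generic procedure, not necessarily the authors'):

    ```python
    import math

    def hvl(thickness, transmission):
        """First half-value layer from a measured transmission curve
        (transmission normalized to 1.0 at zero thickness), using
        log-linear interpolation between the two bracketing points."""
        pairs = list(zip(thickness, transmission))
        for (t0, m0), (t1, m1) in zip(pairs, pairs[1:]):
            if m0 >= 0.5 > m1:
                # interpolate linearly in log(transmission)
                frac = (math.log(0.5) - math.log(m0)) / (math.log(m1) - math.log(m0))
                return t0 + frac * (t1 - t0)
        raise ValueError("transmission never falls below 0.5")
    ```

    Log-linear interpolation is exact for a purely exponential (monoenergetic) beam and a good approximation for the mildly hardening kilovoltage spectra discussed here.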

  17. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media.

    PubMed

    Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553

  18. The FERMI@Elettra free-electron-laser source for coherent X-ray physics: photon properties, beam transport system, and applications

    SciTech Connect

    Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio

    2010-04-05

    FERMI@Elettra comprises two free-electron lasers (FELs) that will generate short pulses (τ ≈ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high-quality, longitudinally coherent photon pulses. This capability, together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization, will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.

  19. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    SciTech Connect

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S

    2014-06-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). Charged-particle equilibrium (CPE) in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties of the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
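
    The "method of fictitious interaction" mentioned above is often called Woodcock or delta tracking: steps are sampled from a majorant cross-section, and a tentative collision is accepted as real with probability μ(x)/μ_max. A minimal 1-D sketch under that assumption (not the actual gPMC or MCsquare code):

    ```python
    import math
    import random

    def woodcock_track(mu, mu_max, s_max, rng):
        """Fictitious-interaction (Woodcock) step sampling through a 1-D
        heterogeneous slab of thickness s_max: draw steps using the
        majorant cross-section mu_max, then accept a *real* interaction
        at x with probability mu(x)/mu_max; otherwise the collision is
        fictitious and the particle keeps going."""
        x = 0.0
        while x < s_max:
            # exponential step with the (constant) majorant cross-section
            x += -math.log(1.0 - rng.random()) / mu_max
            if x >= s_max:
                break
            if rng.random() < mu(x) / mu_max:
                return x             # real interaction site
        return None                  # escaped the slab without interacting
    ```

    Because the majorant is constant, no geometry lookups are needed at every step, which is what makes the scheme attractive on GPUs; the rejection step restores the correct interaction density for the true μ(x).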

  20. A standard timing benchmark for EGS4 Monte Carlo calculations.

    PubMed

    Bielajew, A F; Rogers, D W

    1992-01-01

    A Fortran 77 Monte Carlo source code built from the EGS4 Monte Carlo code system has been used for timing benchmark purposes on 29 different computers. This code simulates the deposition of energy from an incident electron beam in a 3-D rectilinear geometry such as one would employ to model electron and photon transport through a series of CT slices. The benchmark forms a standalone system and does not require that the EGS4 system be installed. The Fortran source code may be ported to different architectures by modifying a few lines and only a moderate amount of CPU time is required ranging from about 5 h on PC/386/387 to a few seconds on a massively parallel supercomputer (a BBN TC2000 with 512 processors). PMID:1584121

  1. Comparison of pencil-beam, collapsed-cone and Monte-Carlo algorithms in radiotherapy treatment planning for 6-MV photons

    NASA Astrophysics Data System (ADS)

    Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho

    2015-07-01

    Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with each of the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated by using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms; in fact, it overestimated the dose compared with the CC and the MC calculations. The MC algorithm showed better accuracy than the other algorithms.

  2. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods: A Case Study at the South Oyster Bacterial Transport Site in Virginia

    SciTech Connect

    Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.

    2004-12-22

    The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable, and petrophysical models for linking GPR attenuation to geochemical parameters are derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
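
    The MCMC machinery underlying such estimation is conceptually simple even though the study's full model integrates GPR, lithofacies, and petrophysical components. The sketch below shows only the sampling engine, a generic random-walk Metropolis step on a toy one-dimensional posterior; it is not the paper's statistical model:

    ```python
    import math
    import random

    def metropolis(log_post, x0, step, n, rng):
        """Random-walk Metropolis sampler: propose x' = x + U(-step, step)
        and accept with probability min(1, exp(log_post(x') - log_post(x)))."""
        x, lp = x0, log_post(x0)
        chain = []
        for _ in range(n):
            xp = x + rng.uniform(-step, step)
            lpp = log_post(xp)
            if math.log(1.0 - rng.random()) < lpp - lp:
                x, lp = xp, lpp          # accept the proposal
            chain.append(x)              # rejected moves repeat the old state
        return chain
    ```

    Posterior means and credible intervals for the parameter of interest are then read off the chain after discarding a burn-in segment.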

  3. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. PMID:24162375
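
    The frequency- and dose-mean lineal energies compared above have standard definitions in terms of the measured distribution f(y): y_F = Σf·y / Σf and y_D = Σf·y² / Σf·y. A minimal sketch of that bookkeeping for binned data:

    ```python
    def lineal_energy_means(y, f):
        """Frequency-mean (y_F) and dose-mean (y_D) lineal energy from a
        binned frequency distribution f(y):
            y_F = sum(f*y) / sum(f),   y_D = sum(f*y^2) / sum(f*y)."""
        m0 = sum(f)
        m1 = sum(fi * yi for yi, fi in zip(y, f))
        m2 = sum(fi * yi * yi for yi, fi in zip(y, f))
        return m1 / m0, m2 / m1
    ```

    The dose mean weights each event by the energy it deposits, so y_D is always at least y_F and is the quantity most sensitive to the high-y tail of the distribution.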

  4. Monte Carlo transport model comparison with 1A GeV accelerated iron experiment: heavy-ion shielding evaluation of NASA space flight-crew foodstuff

    NASA Technical Reports Server (NTRS)

    Stephens, D. L. Jr; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.

    2002-01-01

    Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy-ion transport code developed by Zeitlin at LBNL is compared with the experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R2 value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R2, respectively. ©2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
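
    The model-versus-experiment comparison above uses a least-squares line with the intercept forced to zero. That fit has a closed form, sketched below; note that R² for a through-origin fit is computed here against the mean of y, which is one of several conventions, and the abstract does not say which was used:

    ```python
    def fit_through_origin(x, y):
        """Least-squares slope with the intercept forced to zero, plus an
        R^2 measuring residual variation relative to the mean of y."""
        slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
        ss_res = sum((yi - slope * xi) ** 2 for xi, yi in zip(x, y))
        y_mean = sum(y) / len(y)
        ss_tot = sum((yi - y_mean) ** 2 for yi in y)
        return slope, 1.0 - ss_res / ss_tot
    ```

    A slope near 1 with high R² would indicate that the simulated fluences track the measured ones element by element, which is how the 0.825/0.429 (and, without the Z=14 outlier, 0.888/0.705) figures should be read.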

  5. Monte Carlo transport model comparison with 1A GeV accelerated iron experiment: heavy-ion shielding evaluation of NASA space flight-crew foodstuff.

    PubMed

    Stephens, D L; Townsend, L W; Miller, J; Zeitlin, C; Heilbronn, L

    2002-01-01

    Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy-ion transport code developed by Zeitlin at LBNL is compared with the experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R2 value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R2, respectively. PMID:12539754

  6. Monte Carlo transport model comparison with 1A GeV accelerated iron experiment: heavy-ion shielding evaluation of NASA space flight-crew foodstuff

    NASA Astrophysics Data System (ADS)

    Stephens, D. L.; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.

    Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy-ion transport code developed by Zeitlin at LBNL is compared with the experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R2 value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R2, respectively.

  7. SU-F-18C-09: Assessment of OSL Dosimeter Technology in the Validation of a Monte Carlo Radiation Transport Code for CT Dosimetry

    SciTech Connect

    Carver, D; Kost, S; Pickens, D; Price, R; Stabin, M

    2014-06-15

    Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard CT 100-mm pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner for 100 and 120 kVp at 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared to both CTDI100 values determined from the ion chamber and to CTDI100 values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI100 values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%), and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI100 values agree to within 10% or less, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool with the significant benefit of also assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry, such as verification of spatial dose distribution and beam width.
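
    CTDI100 is defined as the integral of the single-rotation dose profile D(z) over ±50 mm, divided by the nominal beam width (n·T), which is exactly the quantity a 150-mm OSL strip or a simulated profile can supply. A minimal sketch of that integral by the trapezoidal rule, assuming (for brevity) that the samples already span exactly the ±50 mm window:

    ```python
    def ctdi100(z_mm, dose, beam_width_mm):
        """CTDI100 = (1 / nT) * integral of the dose profile D(z) over
        z = -50..+50 mm, here via the trapezoidal rule.  Assumes z_mm
        already spans exactly the +/-50 mm window."""
        integral = sum((d0 + d1) / 2.0 * (z1 - z0)
                       for z0, z1, d0, d1 in zip(z_mm, z_mm[1:],
                                                 dose, dose[1:]))
        return integral / beam_width_mm
    ```

    With the 40-mm nominal beam width quoted above, profiles measured by the OSL strips and computed by the GEANT4 model can be reduced to CTDI100 values in exactly this way and compared point by point.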