Science.gov

Sample records for Monte Carlo photon transport

  1. Methodology of Continuous-Energy Adjoint Monte Carlo for Neutron, Photon, and Coupled Neutron-Photon Transport

    SciTech Connect

    Hoogenboom, J. Eduard

    2003-02-15

Adjoint Monte Carlo may be a useful alternative to regular Monte Carlo calculations in cases where a small detector inhibits an efficient Monte Carlo calculation, because only very few particle histories cross the detector. However, in general-purpose Monte Carlo codes, normally only the multigroup form of adjoint Monte Carlo is implemented. In this article the general methodology for continuous-energy adjoint Monte Carlo neutron transport is reviewed and extended to photon and coupled neutron-photon transport. In the latter cases the discrete photons generated by annihilation or by neutron capture or inelastic scattering prevent a direct application of the general methodology. Two successive reaction events must be combined in the selection process to accommodate the adjoint analog of a reaction resulting in a photon with a discrete energy. Numerical examples illustrate the application of the theory for some simplified problems.

  2. Systems guide to MCNP (Monte Carlo Neutron and Photon Transport Code)

    SciTech Connect

    Kirk, B.L.; West, J.T.

    1984-06-01

The subject of this report is the implementation of the Los Alamos National Laboratory Monte Carlo Neutron and Photon Transport Code - Version 3 (MCNP) on different types of computer systems, especially the IBM MVS system. The report supplements the documentation of the RSIC computer code package CCC-200/MCNP. Details of the procedure to follow in executing MCNP on the IBM computers, either in batch mode or interactive mode, are provided.

  3. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. The main task in MC dose calculation research is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the accurate-radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport was presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate-radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
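The variance reduction this abstract alludes to can be illustrated with a standard Russian roulette step, which terminates low-weight histories probabilistically while keeping the estimator unbiased. This is a generic textbook sketch, not the SuperMC implementation; the threshold and survival weight are illustrative parameters.

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Terminate low-weight particle histories probabilistically while
    conserving the expected weight (generic variance-reduction sketch)."""
    if weight >= threshold:
        return weight                      # heavy enough: continue unchanged
    if rng.random() < weight / survival_weight:
        return survival_weight             # survives with a boosted weight
    return 0.0                             # history terminated
```

The survival probability `weight / survival_weight` makes the expected post-roulette weight equal to the input weight, so the tally remains unbiased while far fewer low-weight histories are tracked.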

  4. Monte Carlo photon transport on vector and parallel supercomputers: Final report

    SciTech Connect

    Martin, W.R.; Nowak, P.F.

    1987-09-30

The vectorized Monte Carlo photon transport code VPHOT has been developed for the Cray-1, Cray X-MP, and Cray-2 computers. The effort in the current project was devoted to multitasking the VPHOT code and implementing it on the Cray X-MP and Cray-2 parallel-vector supercomputers, examining the robustness of the vectorized algorithm under changes in the physics of the test problems, and evaluating the efficiency of alternative algorithms, such as the 'stack-driven' algorithm of Bobrowicz, for possible incorporation into VPHOT. These tasks are discussed in this paper. 4 refs.

  5. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    SciTech Connect

    Cullen, D.E.

    1997-11-22

TART97 is a coupled neutron-photon, 3-dimensional, combinatorial-geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users use only the most recent version of TART97 and its data files.

  6. MCNP: a general Monte Carlo code for neutron and photon transport

    SciTech Connect

    Forster, R.A.; Godfrey, T.N.K.

    1985-01-01

MCNP is a very general Monte Carlo neutron-photon transport code system with approximately 250 person-years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code, as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its database the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N-Particle, where N particle varieties will be transported.

  7. ITS Version 6: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  8. penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE

    SciTech Connect

    Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    2015-01-01

    The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.

  9. Space applications of the MITS electron-photon Monte Carlo transport code system

    SciTech Connect

    Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.

    1996-07-01

    The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.

  10. A Coupled Neutron-Photon 3-D Combinatorial Geometry Monte Carlo Transport Code

    Energy Science and Technology Software Center (ESTSC)

    1998-06-12

TART97 is a coupled neutron-photon, 3-dimensional, combinatorial-geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast: if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users use only the most recent version of TART97 and its data files.

  11. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes

    SciTech Connect

    Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.

    1993-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.

  12. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  13. Monte Carlo electron-photon transport using GPUs as an accelerator: Results for a water-aluminum-water phantom

    SciTech Connect

    Su, L.; Du, X.; Liu, T.; Xu, X. G.

    2013-07-01

An electron-photon coupled Monte Carlo code, ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. Møller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuous Slowing Down Approximation (CSDA). Photon transport was handled using the delta tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
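The delta tracking (Woodcock) method mentioned for the photon transport samples flight distances with a constant majorant cross section and rejects "virtual" collisions, so no ray tracing through material boundaries is needed. A minimal 1D sketch, assuming a user-supplied attenuation coefficient `mu_of_x` bounded above by the majorant `mu_max` (names are illustrative, not ARCHER's API):

```python
import math
import random

def delta_track(x, mu_max, mu_of_x, rng=random):
    """Sample the position of the next real collision by Woodcock tracking:
    fly with the majorant cross section mu_max, then accept the tentative
    collision with probability mu(x)/mu_max; otherwise it is a virtual
    collision and the flight continues."""
    while True:
        x += -math.log(rng.random()) / mu_max   # exponential flight length
        if rng.random() < mu_of_x(x) / mu_max:  # real vs. virtual collision
            return x
```

In a homogeneous medium with `mu_of_x(x) == mu_max`, every tentative collision is accepted, so the sampled distances are exponential with mean `1/mu_max`; the efficiency of the method degrades only when the majorant greatly exceeds the local cross section.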

  14. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    SciTech Connect

    Morgan C. White

    2000-07-01

The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to calculate radiation dose due to the neutron environment around an MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

  15. A Monte Carlo study of high-energy photon transport in matter: application for multiple scattering investigation in Compton spectroscopy.

    PubMed

    Brancewicz, Marek; Itou, Masayoshi; Sakurai, Yoshiharu

    2016-01-01

The first results of multiple scattering simulations of polarized high-energy X-rays for Compton experiments using a new Monte Carlo program, MUSCAT, are presented. The program is developed to follow the restrictions of real experimental geometries. The new simulation algorithm uses not only the well-known photon splitting and interaction forcing methods but is also upgraded with a new propagation separation method and is highly vectorized. In this paper, a detailed description of the new simulation algorithm is given. The code is verified by comparison with previous experimental and simulation results by the ESRF group and with new restricted-geometry experiments carried out at SPring-8. PMID:26698070

  16. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  17. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    SciTech Connect

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.

  18. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons, and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy-ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C and also discusses ongoing code development.

  19. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  20. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  1. Coupled electron-photon radiation transport

    SciTech Connect

    Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.

    2000-01-17

Massively parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.

  2. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
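The weight-windows biasing described in this abstract can be sketched as a per-particle check: weights above the window are split into lighter copies, weights below it play Russian roulette toward the window center, and weights inside pass through unchanged. A generic illustration under assumed window bounds, not the actual code's implementation:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of particle weights after a weight-window check:
    split above the window, Russian roulette below it, pass inside it.
    Total expected weight is conserved in every branch."""
    w_mid = 0.5 * (w_low + w_high)
    if weight > w_high:                 # too heavy: split into n lighter copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                  # too light: roulette toward the centre
        if rng.random() < weight / w_mid:
            return [w_mid]
        return []
    return [weight]                     # inside the window: unchanged
```

Splitting conserves weight exactly, and the roulette branch conserves it in expectation, which is what keeps the biased tallies unbiased; the adjoint-flux calculations mentioned in the abstract are one way to choose `w_low` and `w_high` per region and energy.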

  3. Photon transport in binary photonic lattices

    NASA Astrophysics Data System (ADS)

    Rodríguez-Lara, B. M.; Moya-Cessa, H.

    2013-03-01

    We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.
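The coupled mode theory referred to here reduces classical propagation in a waveguide array to the ODE system i dc_n/dz = β_n c_n + κ_{n-1} c_{n-1} + κ_n c_{n+1} for the modal amplitudes c_n. A minimal explicit-Euler sketch for a finite array with open boundaries (illustrative only; a production integrator would use a higher-order or norm-preserving scheme):

```python
def coupled_mode_step(c, betas, kappas, dz):
    """One explicit Euler step of the coupled-mode equations
    i dc_n/dz = beta_n c_n + kappa_{n-1} c_{n-1} + kappa_n c_{n+1}
    for a finite waveguide array with open boundary conditions.
    c: complex modal amplitudes; betas: N propagation constants;
    kappas: N-1 nearest-neighbour couplings."""
    N = len(c)
    dc = []
    for n in range(N):
        rhs = betas[n] * c[n]
        if n > 0:
            rhs += kappas[n - 1] * c[n - 1]   # coupling from left neighbour
        if n < N - 1:
            rhs += kappas[n] * c[n + 1]       # coupling from right neighbour
        dc.append(-1j * rhs * dz)
    return [c[n] + dc[n] for n in range(N)]
```

A binary lattice in the abstract's sense is obtained by alternating the entries of `betas` (alternating self-energies) or of `kappas` (alternating couplings); injecting light into a single site and stepping in z shows the discrete diffraction pattern.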

  4. Parallel processing Monte Carlo radiation transport codes

    SciTech Connect

    McKinney, G.W.

    1994-02-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.

  5. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    SciTech Connect

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  6. Applications of the Monte Carlo radiation transport toolkit at LLNL

    NASA Astrophysics Data System (ADS)

    Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine

    1999-09-01

Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented, along with a few examples of applications and future directions.

  7. Evaluation of bremsstrahlung contribution to photon transport in coupled photon-electron problems

    NASA Astrophysics Data System (ADS)

    Fernández, Jorge E.; Scot, Viviana; Di Giulio, Eugenio; Salvat, Francesc

    2015-11-01

    The most accurate description of the radiation field in x-ray spectrometry requires the modeling of coupled photon-electron transport. Compton scattering and the photoelectric effect actually produce electrons as secondary particles which contribute to the photon field through conversion mechanisms like bremsstrahlung (which produces a continuous photon energy spectrum) and inner-shell impact ionization (ISII) (which gives characteristic lines). The solution of the coupled problem is time consuming because the electrons interact continuously and, therefore, the number of electron collisions to be considered is always very high. This complex problem is frequently simplified by neglecting the contributions of the secondary electrons. Recent works (Fernández et al., 2013; Fernández et al., 2014) have shown that it is possible to include a separately computed coupled photon-electron contribution like ISII in a photon calculation, improving on this crude approximation while preserving the speed of the pure photon transport model. By means of a similar approach and the Monte Carlo code PENELOPE (coupled photon-electron Monte Carlo), the bremsstrahlung contribution is characterized in this work. The angular distribution of the photons due to bremsstrahlung can be safely considered as isotropic, with the point of emission located at the same place as the photon collision. A new photon kernel describing the bremsstrahlung contribution is introduced: it can be included in photon transport codes (deterministic or Monte Carlo) with a minimal effort. A data library describing the energy dependence of the bremsstrahlung emission has been generated for all elements Z=1-92 in the energy range 1-150 keV. The bremsstrahlung energy distribution for an arbitrary energy is obtained by interpolating in the database. A comparison between a PENELOPE direct simulation and the interpolated distribution using the database shows almost perfect agreement. The use of the database increases the calculation speed by several orders of magnitude.
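    The interpolation step described above can be sketched as follows. The grid, tabulated energies, and 1/E spectra below are illustrative stand-ins for the actual library, not the published data:

```python
import numpy as np

# Hypothetical tabulated library: for each primary energy E_i (keV),
# a bremsstrahlung spectrum stored on a common photon-energy grid.
grid = np.linspace(1.0, 150.0, 150)           # photon energies (keV)
tab_E = np.array([10.0, 50.0, 100.0, 150.0])  # tabulated primary energies
# toy spectra: a rough 1/E shape cut off at the primary energy
tab_spec = np.array([np.where(grid <= E, 1.0 / grid, 0.0) for E in tab_E])

def brems_spectrum(E):
    """Linearly interpolate the stored spectra to an arbitrary primary energy E."""
    i = np.searchsorted(tab_E, E) - 1
    i = np.clip(i, 0, len(tab_E) - 2)
    t = (E - tab_E[i]) / (tab_E[i + 1] - tab_E[i])
    return (1.0 - t) * tab_spec[i] + t * tab_spec[i + 1]

spec = brems_spectrum(75.0)   # spectrum at an energy between table entries
```

    Interpolating in a precomputed table replaces a full electron simulation per photon collision, which is where the speed-up comes from.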

  8. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    SciTech Connect

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    1989-01-01

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, as either single particles or coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems, from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  9. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  10. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  11. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  12. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  13. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  14. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
    • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is complete, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
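    The "global particle find" step can be sketched for a hypothetical 1D decomposition (the rank boundaries below are invented for illustration): the owning rank is recovered from the particle coordinate alone, so stray particles can be routed without any global search:

```python
import bisect

# Hypothetical 1D domain decomposition: rank i owns [bounds[i], bounds[i+1])
bounds = [0.0, 2.5, 5.0, 7.5, 10.0]

def owning_rank(x):
    """Return the rank whose spatial domain contains coordinate x."""
    if not (bounds[0] <= x < bounds[-1]):
        raise ValueError("particle outside global domain")
    return bisect.bisect_right(bounds, x) - 1

# Particles that ended a timestep on the wrong rank get re-routed:
stray = [(0.3, 2), (6.1, 0), (9.9, 1)]       # (coordinate, current rank)
routed = {owning_rank(x) for x, _ in stray}  # destination ranks
```

    In a real domain-decomposed code the coordinate-to-domain map is multi-dimensional and distributed, but the principle is the same: ownership is a pure function of position.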

  15. Monte Carlo method for photon heating using temperature-dependent optical properties.

    PubMed

    Slade, Adam Broadbent; Aguilar, Guillermo

    2015-02-01

    The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will vary greatly, such as in the case of laser thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done by comparison to established Monte Carlo simulations using constant properties, and by comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
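    A minimal sketch of such a feedback loop follows. This is a toy 1D absorption-only slab, not the authors' simulation: the temperature dependence of mu_a and the heat-update scaling are invented for illustration. It shows the two key ingredients: local, temperature-dependent properties, and photon steps segmented at voxel boundaries so each segment uses the local coefficient:

```python
import numpy as np
rng = np.random.default_rng(0)

def mu_a(T):
    # hypothetical temperature-dependent absorption coefficient (1/cm)
    return 0.5 + 0.002 * (T - 300.0)

nz, dz = 50, 0.02                 # 1 cm slab split into 50 voxels
T = np.full(nz, 300.0)            # temperature per voxel (K)

for step in range(5):             # feedback loop: transport, then heat update
    dose = np.zeros(nz)
    for _ in range(500):          # absorption-only photons, normal incidence
        z, tau = 0.0, -np.log(rng.random())     # sampled optical depth
        while True:
            k = int(z / dz)
            if k >= nz:
                break                            # transmitted out the back
            seg = min(dz - (z - k * dz), tau / mu_a(T[k]))
            tau -= seg * mu_a(T[k])              # segment uses the LOCAL mu_a
            z += seg
            if tau <= 1e-12:
                dose[k] += 1.0                   # photon absorbed in voxel k
                break
    T += 0.1 * dose               # crude heat update (arbitrary scaling)
```

    Each transport pass sees the temperature field left by the previous heat update, which is the feedback the abstract describes.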

  16. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    NASA Astrophysics Data System (ADS)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed SuperMC input files is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions for protons. Other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against Accelerator Driven subcritical System (ADS) benchmark data and a model released by the IAEA from its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.

  17. Low variance methods for Monte Carlo simulation of phonon transport

    E-print Network

    Péraud, Jean-Philippe M. (Jean-Philippe Michel)

    2011-01-01

    Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...

  18. Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, Steven

    1999-01-01

    The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single-scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple-scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described by an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate of the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
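    The random-walk reasoning can be illustrated with a toy isotropic-scattering simulation (not the paper's derivation; exponential step lengths of mean free path ℓ, no absorption, and asymmetry parameter g = 0 are assumed). For this idealized walk the rms horizontal displacement is ℓ·sqrt(4N/3) after N scatterings, i.e. it grows like sqrt(N):

```python
import numpy as np
rng = np.random.default_rng(1)

def rms_horizontal(n_scat, mfp=1.0, n_photons=20000):
    """MC estimate of the rms horizontal displacement after n_scat isotropic
    scatterings with exponentially distributed step lengths (toy model)."""
    mu = rng.uniform(-1.0, 1.0, (n_photons, n_scat))       # cos(theta)
    phi = rng.uniform(0.0, 2 * np.pi, (n_photons, n_scat))  # azimuth
    s = rng.exponential(mfp, (n_photons, n_scat))           # step lengths
    sin_t = np.sqrt(1.0 - mu**2)
    x = (s * sin_t * np.cos(phi)).sum(axis=1)
    y = (s * sin_t * np.sin(phi)).sum(axis=1)
    return np.sqrt(np.mean(x**2 + y**2))

# sqrt(N) scaling: quadrupling the scattering count doubles the rms
r25, r100 = rms_horizontal(25), rms_horizontal(100)
```

    The paper's formula additionally folds in the asymmetry parameter and single-scattering albedo, which rescale the effective step of this simple walk.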

  19. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  20. More realistic Monte Carlo calculations of photon detector response functions

    NASA Astrophysics Data System (ADS)

    Rogers, D. W. O.

    1982-08-01

    The Monte Carlo electron-photon simulation package EGS3 has been used to calculate response functions for a wide variety of nuclear detectors. The detectors must be cylindrical and can be encased in a jacket or shielded by a plate of arbitrary material. The detector can have inert volumes. The code, via EGS, includes all physical processes necessary for accurate calculations for incident photons above 300 keV. An error in EGS concerning terminal processing of positrons has been found and corrected. The code runs 3 to 5 times faster than CYLTRAN. The paper presents benchmark comparisons between EGS and a wide variety of previous NaI and Ge(Li) response function calculations, in particular those of ETRAN, where small but systematic differences were observed above 10 MeV incident energy. The effects of detector cladding and shielding have been studied. The program quantitatively explains the effects of a 1.18 g/cm^2 Be beta absorber and qualitatively explains the 511 keV peak present for incident high energy photons. To first order it was found that only for the photopeak efficiencies can the effect of material in front of a detector be treated as a simple absorption. The reduction in the efficiency for counts within 1.5 MeV of the incident energy is considerably less than expected using only the cross section, and the reduction for escape peaks is significantly more due to reflections of 511 keV photons back into the detector. Calculations of absolute Ge(Li) detector efficiency were found to be difficult due to sensitivity to inert layer parameters, but relative efficiency curves were much less sensitive. Calculations are included of response functions for bismuth germanate and large NaI detectors and angular scans of Ge(Li) detectors for solid angle correction factors. Difficulties calculating electron response functions are discussed.

  1. Review of Monte Carlo modeling of light transport in tissues.

    PubMed

    Zhu, Caigang; Liu, Quan

    2013-05-01

    A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics, paying special attention to recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which include the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, including scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages, are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, potential directions for the future development of the MC method in tissue optics are discussed. PMID:23698318
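    The photon-packet tracking procedure mentioned above can be sketched minimally. This toy version (assumed coefficients, infinite homogeneous medium, energy balance only; the hop distance and direction are not tracked) shows the standard drop-spin-roulette loop and checks that all launched weight is eventually absorbed:

```python
import numpy as np
rng = np.random.default_rng(2)

mu_a, mu_s, g = 1.0, 9.0, 0.9   # assumed coefficients (1/cm) and anisotropy
mu_t = mu_a + mu_s

def sample_hg(g):
    """Sample the scattering-angle cosine from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return rng.uniform(-1.0, 1.0)
    f = (1 - g * g) / (1 - g + 2 * g * rng.random())
    return (1 + g * g - f * f) / (2 * g)

n, absorbed = 2000, 0.0
for _ in range(n):               # track photon packets in an infinite medium
    w = 1.0
    while w > 0.0:
        # drop: deposit the absorbed fraction of the packet weight per collision
        absorbed += w * mu_a / mu_t
        w *= mu_s / mu_t
        cos_t = sample_hg(g)     # spin: new direction (unused in this energy balance)
        if w < 1e-2:             # Russian roulette keeps the game unbiased
            if rng.random() < 0.1:
                w *= 10.0
            else:
                w = 0.0
absorbed_fraction = absorbed / n  # approaches 1: all weight is eventually absorbed
```

    Real tissue codes add the hop step (sampled path lengths through a voxelized geometry) and boundary handling; the weighting and roulette logic is as above.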

  2. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    SciTech Connect

    Bal, Guillaume; Davis, Anthony B.; Langmore, Ian

    2011-08-20

    Highlights: • We introduce a variance reduction scheme for Monte Carlo (MC) transport. • The primary application is atmospheric remote sensing. • The technique first solves the adjoint problem using a deterministic solver. • Next, the adjoint solution is used as an importance function for the MC solver. • The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.

  3. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    SciTech Connect

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
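    For reference, the IMC linearization of Fleck and Cummings converts a fraction of absorption-reemission into effective scattering through the Fleck factor, commonly written (with $\alpha$ a time-centering parameter, $\sigma_P$ the Planck-mean opacity, $a$ the radiation constant, and $c_v$ the specific heat; consult the original paper for the exact normalization) as:

```latex
f = \frac{1}{1 + \alpha\,\beta\,c\,\Delta t\,\sigma_P},
\qquad
\beta = \frac{\partial u_r}{\partial u_m} = \frac{4 a T^3}{\rho\, c_v}
```

    As $\Delta t \to 0$, $f \to 1$ and the analog absorption-reemission physics is recovered; large timesteps drive $f$ toward 0, replacing stiff reemission by stable effective scattering.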

  4. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  5. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy

    2015-09-01

    Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this LINAC head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom, percent depth doses (PDDs) and beam profiles at a depth of 10 cm, were compared with measurements. The commissioning is considered complete when the dose difference between measured and calculated relative depth-dose data along the central axis and the dose profile at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations. In this commissioning process, the dose-difference criteria for PDDs and dose profiles were achieved at an incident electron energy of 6.4 MeV.

  6. Monte Carlo simulation accuracy for calibrating germanium detector photon efficiency

    SciTech Connect

    Kamboj, Sunita; Kahn, B.

    1997-08-01

    Over the past 30 years, Monte Carlo simulation of photons interacting with matter has gradually improved to the extent that it now appears suitable for calibrating germanium detectors for counting efficiency in gamma-ray spectral analysis. The process is particularly useful because it can be applied for a variety of source shapes and spatial relations between source and detector by simply redefining the geometry, whereas calibration with radioactive standards requires a separate set of measurements for each source shape and location relative to the detector. Simulation accuracy was evaluated for two large (126% and 110%) and one medium-sized (20%) detectors with radioactive point sources at distances of 10 m, 1.6 m, and 0.50 m and with aqueous solutions in a 0.5-L reentrant beaker and in jars of similar volume but various dimensions. The sensitivity in comparing measured and simulated results was limited by a combined uncertainty of about 3% in the radioactive standards and experimental conditions. Simulation was performed with the MCNP-4 code.

  7. Generalized coupled photon transport equations for handling correlated photon streams with distinct frequencies.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin; Nahavandi, Saeid

    2012-08-15

    A generalized form of coupled photon transport equations that can handle correlated light beams with distinct frequencies is introduced. The derivation is based on the principle of energy conservation. For a single frequency, the current formulation reduces to a standard photon transport equation, and for fluorescence and phosphorescence, the diffusion models derived from the proposed photon transport model match for homogeneous media. The generalized photon transport model is extended to handle wideband inputs in the frequency domain. PMID:23381285

  8. Monte Carlo Simulation of Light Transport in Tissue, Beta Version

    Energy Science and Technology Software Center (ESTSC)

    2003-12-09

    Understanding light-tissue interaction is fundamental in the field of Biomedical Optics. It has important implications for both therapeutic and diagnostic technologies. In this program, light transport in scattering tissue is modeled by absorption and scattering events as each photon travels through the tissue. The path of each photon is determined statistically by calculating probabilities of scattering and absorption. Other measured quantities are total reflected light, total transmitted light, and total heat absorbed.

  9. Optics and photonics used in road transportation

    NASA Astrophysics Data System (ADS)

    Gingras, Denis J.

    1998-09-01

    Photonics is ideal for precise, remote and contactless measurements in harsh conditions. Thanks to major breakthroughs in the technologies involved, optical sensing is becoming more compact, robust and affordable. The purpose of this paper is to provide an overview of the capabilities of photonics applied to road transportation problems. In particular we will consider four types of situations: (1) measurements for traffic analysis and surveillance, (2) measurements for road infrastructure diagnosis and quality assessment, (3) photonics in smart driving and intelligent vehicles and (4) measurements for other purposes (safety, inventories, tolls etc.). These topics will be discussed and illustrated by using the results of different projects that have been carried out at INO over the last few years. We will look at different challenges we had to face, such as performing sensitive optical measurements in various outdoor illumination conditions and performing fast and accurate measurements without interfering with normal road traffic flow.

  10. Efficient, automated Monte Carlo methods for radiation transport

    SciTech Connect

    Kong Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
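    As a toy analogue of such staged learning (not the authors' algorithm), one can estimate a simple integral in stages, letting each stage tilt the sampling density toward the integrand. The "learning" schedule below is a crude placeholder; the point is that variance collapses once the density matches the integrand:

```python
import numpy as np
rng = np.random.default_rng(3)

# Estimate I = integral of exp(-3x) over [0, 1] by staged importance sampling.
f = lambda x: np.exp(-3.0 * x)
true_I = (1.0 - np.exp(-3.0)) / 3.0

lam = 0.0                       # stage-0 density is uniform on [0, 1]
estimates = []
for stage in range(4):
    u = rng.random(2000)
    if lam == 0.0:              # sample from p(x) ∝ exp(-lam*x) on [0, 1]
        x, p = u, np.ones_like(u)
    else:                       # inverse-CDF sampling of the tilted density
        x = -np.log(1.0 - u * (1.0 - np.exp(-lam))) / lam
        p = lam * np.exp(-lam * x) / (1.0 - np.exp(-lam))
    estimates.append((f(x) / p).mean())
    # placeholder "learning" step: march the tilt toward the optimal lam = 3
    lam = min(3.0, lam + 1.5)
estimates = np.array(estimates)  # final stage has (near) zero variance
```

    At lam = 3 the weight f(x)/p(x) is constant, so the final-stage estimate is exact up to floating point, mirroring the geometric error reduction the abstract describes.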

  11. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Akhavan, Azadeh; Vosoughi, Naser

    2015-12-01

    Radiation transport techniques used in radiation detection systems fall into one of two categories: probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. The approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. The multigroup gamma cross-section library required for this numerical transport simulation is also generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
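
    The "iterating on the scattering source" step can be illustrated with a minimal one-group, infinite-medium toy, where the scalar flux is built up as the sum of its collided components. The cross sections are hypothetical, not the paper's multigroup library.

```python
# One-group, infinite-medium scattering-source iteration: phi = sum_n phi_n,
# where phi_n is the n-times-collided flux component.
sigma_t, sigma_s, source = 1.0, 0.6, 1.0    # total xs, scattering xs, source
phi_n = source / sigma_t                    # uncollided flux component
phi = 0.0
for _ in range(200):                        # iterate on the scattering source
    phi += phi_n
    phi_n *= sigma_s / sigma_t              # next collided component
print(phi)   # converges to source / (sigma_t - sigma_s) = 2.5
```

    The iteration is a geometric series in the scattering ratio, which is why convergence slows as sigma_s approaches sigma_t (highly scattering media).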

  12. Accurate and efficient Monte Carlo solutions to the radiative transport equation in the spatial frequency domain

    PubMed Central

    Gardner, Adam R.; Venugopalan, Vasan

    2012-01-01

    We present an approach to solving the radiative transport equation (RTE) for layered media in the spatial frequency domain (SFD) using Monte Carlo (MC) simulations. This is done by obtaining a complex photon weight from analysis of the Fourier transform of the RTE. We also develop a modified shortcut method that enables a single MC simulation to efficiently provide RTE solutions in the SFD for any number of spatial frequencies. We provide comparisons between the modified shortcut method and conventional discrete transform methods for SFD reflectance. Further results for oblique illumination illustrate the potential diagnostic utility of the SFD phase-shifts for analysis of layered media. PMID:21685989
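
    A minimal sketch of the complex-photon-weight idea, under strong simplifying assumptions (2-D geometry, isotropic scattering, made-up optical properties; not the authors' layered-media implementation): each escaping photon deposits a complex weight exp(-i 2*pi*fx*x) per spatial frequency, so a single simulation yields SFD reflectance at every requested frequency.

```python
import cmath
import math
import random

def sfd_reflectance(freqs_mm, n_photons=10000, mu_a=0.01, mu_s=1.0, seed=7):
    """Toy 2-D Monte Carlo: pencil beam into a semi-infinite medium (z > 0),
    isotropic scattering, weight absorption per collision.  Escaping photons
    score a complex weight per spatial frequency fx (cycles/mm)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    acc = [0j] * len(freqs_mm)
    for _ in range(n_photons):
        x, z = 0.0, 0.0
        ux, uz = 0.0, 1.0                    # launched straight down
        w = 1.0
        while True:
            s = -math.log(1.0 - rng.random()) / mu_t   # free path [mm]
            x += ux * s
            z += uz * s
            if z < 0.0:                      # escaped: score exit position
                for i, fx in enumerate(freqs_mm):
                    acc[i] += w * cmath.exp(-2j * math.pi * fx * x)
                break
            w *= albedo                      # absorb a fraction of the weight
            if w < 1e-4:                     # crude termination (no roulette)
                break
            theta = 2.0 * math.pi * rng.random()       # isotropic redirect
            ux, uz = math.cos(theta), math.sin(theta)
    return [a / n_photons for a in acc]

R = sfd_reflectance([0.0, 0.1, 0.5])
print([abs(r) for r in R])   # |R| falls off with spatial frequency
```

    At fx = 0 the complex weights reduce to ordinary diffuse reflectance; by the triangle inequality every nonzero frequency yields a smaller magnitude, reproducing the expected SFD roll-off.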

  13. Photon-induced carrier transport in high efficiency midinfrared quantum cascade lasers

    E-print Network

    Mátyás, Alpár; Jirauschek, Christian; 10.1063/1.3608116

    2011-01-01

    A midinfrared quantum cascade laser with high wall-plug efficiency is analyzed by means of an ensemble Monte Carlo method. Both the carrier transport and the cavity field dynamics are included in the simulation, offering a self-consistent approach for analyzing and optimizing the laser operation. It is shown that at low temperatures, photon emission and absorption can govern the carrier transport in such devices. Furthermore, we find that photon-induced scattering can strongly affect the kinetic electron distributions within the subbands. Our results are validated against available experimental data.

  14. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  15. Neutron streaming Monte Carlo radiation transport code MORSE-CG

    SciTech Connect

    Halley, A.M.; Miller, W.H.

    1986-11-01

    Calculations have been performed using the Monte Carlo code, MORSE-CG, to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results define the effectiveness of various gap configurations to eliminate radiation streaming.

  16. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin

    2015-07-01

    The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs for photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom follow similar trends but are larger than those from the other two models. The differences are due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)

  17. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
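
    The Map/Reduce split described above can be sketched in-process: independent "Map" tasks each simulate a batch of photon histories with their own RNG stream, and a "Reduce" step merges the partial tallies. The absorbing-medium physics and all numbers are toy stand-ins for MC321, not its actual model.

```python
import math
import random
from functools import reduce

def map_task(task):
    """One Map task: simulate a batch of photon histories in a purely
    absorbing medium and return a partial tally (count, summed depth)."""
    seed, n_photons, mu_a = task
    rng = random.Random(seed)            # independent per-task RNG stream
    total_depth = 0.0
    for _ in range(n_photons):
        # Absorption depth sampled from an exponential with rate mu_a.
        total_depth += -math.log(1.0 - rng.random()) / mu_a
    return (n_photons, total_depth)

def reduce_task(a, b):
    """Reduce merges partial tallies; losing a Map task would lose photons,
    not correctness, which is the fault-tolerance property exploited above."""
    return (a[0] + b[0], a[1] + b[1])

mu_a = 2.0                               # 1/mm, hypothetical absorber
tasks = [(seed, 5000, mu_a) for seed in range(8)]   # eight "nodes"
n, depth_sum = reduce(reduce_task, map(map_task, tasks))
print(depth_sum / n)   # mean absorption depth, ~ 1/mu_a = 0.5 mm
```

    On Hadoop the `map_task`/`reduce_task` pair would run as distributed jobs; the scoring logic is unchanged, which is why the distributed output matches the single-threaded program.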

  18. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    NASA Astrophysics Data System (ADS)

    Pratx, Guillem; Xing, Lei

    2011-12-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.

  19. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916

  20. Monte Carlo simulation and experimental measurement of a nonspectroscopic radiation portal monitor for photon detection efficiencies of internally deposited radionuclides

    NASA Astrophysics Data System (ADS)

    Carey, Matthew Glen

    Particle transport of radionuclide photons using the Monte Carlo N-Particle computer code can be used to determine a portal monitor's photon detection efficiency, in units of counts per photon, for internally deposited radionuclides. Good agreement has been found with experimental results for radionuclides that emit higher-energy photons, such as Cs-137 and Co-60. Detection efficiency for radionuclides that emit lower-energy photons, such as Am-241, depends greatly on the effective discriminator energy level of the portal monitor as well as on any attenuating material between the source and detectors. This evaluation uses a chi-square approach to determine the best-fit discriminator level of a non-spectroscopic portal monitor when the effective discriminator level, in units of energy, is not known. Internal detection efficiencies were evaluated experimentally using an anthropomorphic phantom with NIST-traceable sources at various internal locations, and by simulation using MCNP5. This research finds that MCNP5 can be an effective tool for simulating photon detection efficiencies, given a known discriminator level, for internally and externally deposited radionuclides. In addition, MCNP5 can be used for bounding personnel doses from either internally or externally deposited mixtures of radionuclides.
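
    The chi-square selection of the effective discriminator level can be sketched as follows: simulate detection efficiencies at several candidate thresholds and keep the threshold whose efficiencies best match measurement. All efficiencies, thresholds, and the uncertainty below are made up for illustration.

```python
# Hypothetical measured efficiencies (counts/photon) at three source positions,
# and simulated efficiencies for three candidate discriminator levels (keV).
measured = [0.031, 0.018, 0.012]
simulated = {
    20: [0.045, 0.027, 0.019],
    40: [0.033, 0.019, 0.013],
    60: [0.021, 0.011, 0.007],
}
sigma = 0.002                        # assumed measurement uncertainty

def chi2(model):
    """Chi-square misfit between measured and simulated efficiencies."""
    return sum(((m - s) / sigma) ** 2 for m, s in zip(measured, model))

# Best-fit discriminator level = the threshold minimizing the chi-square.
best = min(simulated, key=lambda thr: chi2(simulated[thr]))
print(best)   # 40 keV gives the smallest chi-square for these numbers
```

    In practice the simulated table would come from repeated MCNP5 runs with different energy cut-offs; the selection logic itself is just this minimization.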

  1. grmonty: A MONTE CARLO CODE FOR RELATIVISTIC RADIATIVE TRANSPORT

    SciTech Connect

    Dolence, Joshua C.; Gammie, Charles F.; Leung, Po Kin; Moscibrodzka, Monika

    2009-10-01

    We describe a Monte Carlo radiative transport code intended for calculating spectra of hot, optically thin plasmas in full general relativity. The version we describe here is designed to model hot accretion flows in the Kerr metric and therefore incorporates synchrotron emission and absorption, and Compton scattering. The code can be readily generalized, however, to account for other radiative processes and an arbitrary spacetime. We describe a suite of test problems, and demonstrate the expected N^(-1/2) convergence rate, where N is the number of Monte Carlo samples. Finally, we illustrate the capabilities of the code with a model calculation, a spectrum of the slowly accreting black hole Sgr A* based on data provided by a numerical general relativistic MHD model of the accreting plasma.
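
    The N^(-1/2) convergence quoted above is the generic Monte Carlo scaling and is easy to check numerically on a toy estimator (the mean of a uniform variate, nothing specific to grmonty): quadrupling the sample count should roughly halve the RMS error.

```python
import math
import random

def mc_error(n, trials=40, seed=0):
    """RMS error of a plain MC estimate of E[x] for x ~ U(0,1) (true mean 0.5),
    averaged over independent trials."""
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(trials):
        est = sum(rng.random() for _ in range(n)) / n
        sq_errs.append((est - 0.5) ** 2)
    return math.sqrt(sum(sq_errs) / trials)

# N^(-1/2) scaling: error(N) / error(4N) should be close to 2.
e1, e2 = mc_error(1000), mc_error(4000)
print(e1 / e2)
```

    The ratio fluctuates around 2 because the RMS error is itself estimated from a finite number of trials.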

  2. Parallel algorithms for Monte Carlo particle transport simulation on exascale computing architectures

    E-print Network

    Romano, Paul K. (Paul Kollath)

    2013-01-01

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...

  3. Controlling photon transport in the single-photon weak-coupling regime of cavity optomechanics

    E-print Network

    Wen-Zhao Zhang; Jiong Cheng; Jing-Yi Liu; Ling Zhou

    2015-07-01

    We study the photon statistics properties of few-photon transport in an optomechanical system where an optomechanical cavity couples to two empty cavities. By analytically deriving the one- and two-photon currents in terms of a zero-time-delayed second-order correlation function, we show that a photon blockade can be achieved in both the single-photon strong-coupling regime and the single-photon weak-coupling regime, due to the nonlinear interaction and multipath interference. Furthermore, our system can be applied as a quantum optical diode, a single-photon source, and a quantum optical capacitor. It is shown that these photon-transport control devices based on photon antibunching do not require the stringent single-photon strong-coupling condition. Our results provide a promising platform for the coherent manipulation of optomechanics, which has potential applications for quantum information processing and quantum circuit realization.

  4. Controlling photon transport in the single-photon weak-coupling regime of cavity optomechanics

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Zhao; Cheng, Jiong; Liu, Jing-Yi; Zhou, Ling

    2015-06-01

    We study the photon statistics properties of few-photon transport in an optomechanical system where an optomechanical cavity couples to two empty cavities. By analytically deriving the one- and two-photon currents in terms of a zero-time-delayed second-order correlation function, we show that a photon blockade can be achieved in both the single-photon strong-coupling regime and the single-photon weak-coupling regime, due to the nonlinear interaction and multipath interference. Furthermore, our system can be applied as a quantum optical diode, a single-photon source, and a quantum optical capacitor. It is shown that these photon-transport control devices based on photon antibunching do not require the stringent single-photon strong-coupling condition. Our results provide a promising platform for the coherent manipulation of optomechanics, which has potential applications for quantum information processing and quantum circuit realization.

  5. The Implementation of Photon Polarization into the Mercury Transport Code 

    E-print Network

    Windsor, Ethan

    2014-06-04

    Polarization effects have been ignored in most photon transport codes to date, but new technology has created a need for portable, massively parallel, versatile transport codes that include the effects of polarization. In this project, the effects...

  6. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.

  7. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  8. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo

    SciTech Connect

    Booth, T.E.

    1998-06-22

    It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
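
    The zero-variance property mentioned in the first sentence can be demonstrated directly on a toy integral: when the sampling density is exactly proportional to the integrand, every history returns the exact answer. The integrand and sample counts are illustrative choices, not the paper's transport setting.

```python
import random

def f(x):
    return 3.0 * x * x          # integrand on [0, 1]; exact integral I = 1

rng = random.Random(3)
n = 5000

# Plain MC with uniform sampling: the estimate carries statistical noise.
plain = sum(f(rng.random()) for _ in range(n)) / n

# Sampling from the exact importance density q(x) = f(x)/I = 3x^2
# (inverse CDF: x = u**(1/3)).  The score f(x)/q(x) is identically 1, so
# every history returns the exact answer -- the zero-variance limit.
scores = []
for _ in range(n):
    x = rng.random() ** (1.0 / 3.0)
    scores.append(f(x) / (3.0 * x * x))

print(plain)        # ~ 1, with statistical noise
print(set(scores))  # a single value: zero variance
```

    Iterative schemes like the one in the abstract work toward this limit: the closer the learned importance function gets to the exact one, the smaller the spread of the scores.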

  9. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
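
    The Random Sequential Addition step can be sketched as follows: propose sphere centers uniformly and reject any candidate that overlaps an accepted sphere. The paper's fast variant accelerates the overlap test with a nearest-neighbor search; this sketch uses a plain O(n) scan and made-up dimensions to stay short.

```python
import random

def rsa_pack(n_spheres, radius, box=1.0, seed=4, max_tries=200000):
    """Toy Random Sequential Addition: place equal, non-overlapping spheres
    in a cubic box of side `box` (centers kept a radius away from the walls)."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    min_sq = (2.0 * radius) ** 2           # squared minimum center separation
    while len(centers) < n_spheres and tries < max_tries:
        tries += 1
        c = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        # Accept only if the candidate overlaps no accepted sphere.
        if all(sum((a - b) ** 2 for a, b in zip(c, p)) >= min_sq
               for p in centers):
            centers.append(c)
    return centers

pack = rsa_pack(200, 0.05)
print(len(pack))   # 200 non-overlapping spheres at ~10% packing fraction
```

    Replacing the `all(...)` scan with a cell-list or k-d tree nearest-neighbor query is what turns this O(n) test into the fast RSA variant the abstract describes.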

  10. Monte Carlo generator photon jets used for luminosity at e+e- colliders

    NASA Astrophysics Data System (ADS)

    Fedotovich, G. V.; Kuraev, E. A.; Sibidanov, A. L.

    2010-06-01

    A Monte Carlo Generator Photon Jets (MCGPJ) to simulate Bhabha scattering as well as the production of two-charged-muon and two-photon events is discussed. The theoretical precision of the cross sections with radiative corrections (RC) is estimated to be smaller than 0.2%. The next-to-leading-order (NLO) radiative corrections proportional to α are treated exactly, whereas all logarithmically enhanced contributions, related to photon jets emitted in the collinear region, are taken into account in the frame of the Structure Function approach. Numerous tests of the MCGPJ, as well as a detailed comparison with other MC generators, are presented.

  11. Modeling photon transport in transabdominal fetal oximetry

    NASA Astrophysics Data System (ADS)

    Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton

    2000-07-01

    The possibility of optical oximetry of the blood in the fetal brain, measured across the maternal abdomen just prior to birth, is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diam fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm^2 fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 x 10^-6 cm^-2 if the fetal brain has the same optical properties as the mother, and Tf = 1.0 x 10^-6 cm^-2 for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
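
    The perturbation measure is a one-line computation; recomputing it from the abstract's rounded transport values gives roughly -55%, while the quoted -53% reflects the unrounded simulation output.

```python
# Worked check of the perturbation P = (Tf - Tm) / Tm from the quoted values.
Tm = 2.2e-6   # cm^-2, fetal brain optically identical to the mother
Tf = 1.0e-6   # cm^-2, fetal brain with typical brain optical properties
P = (Tf - Tm) / Tm
print(f"{P:.1%}")   # about -55% from the rounded inputs
```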

  12. Monte Carlo simulation of photon buildup factors for shielding materials in diagnostic x-ray facilities

    SciTech Connect

    Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim

    2012-10-15

    Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry, for photon energies from 10 keV to 150 keV at 5 keV intervals, is presented. Methods: The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example is given of using the obtained buildup factor data to compute the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
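
    The half- and tenth-value-layer calculations follow directly from the transmission curve. A minimal sketch, with an illustrative attenuation coefficient (not one of the paper's fitted values): broad-beam transmission is the buildup factor times the narrow-beam exponential.

```python
import math

mu = 0.5   # 1/mm, hypothetical linear attenuation coefficient

def transmission(x, buildup=1.0):
    """Broad-beam transmission B(x) * exp(-mu * x); buildup=1 is narrow-beam."""
    return buildup * math.exp(-mu * x)

# Narrow-beam HVL and TVL solve T(x) = 1/2 and T(x) = 1/10.
hvl = math.log(2) / mu     # half value layer
tvl = math.log(10) / mu    # tenth value layer
print(hvl, tvl)                        # ~1.386 mm and ~4.605 mm
print(transmission(hvl))               # ~0.5
print(transmission(hvl, buildup=1.4))  # buildup raises transmission to ~0.7
```

    With a non-trivial buildup factor the HVL must instead be found numerically from the full broad-beam curve, which is how the paper extracts HVL, TVL, and the equilibrium TVL from its simulated data.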

  13. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.

    2013-12-01

    We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

  14. Using Monte Carlo simulations to understand the influence of photon propagation on photoacoustic spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.; Liu, Bo; Kruger, Robert A.

    2007-02-01

    Purpose: The purpose of this study is to evaluate the influence of photon propagation on the NIR spectral features associated with photoacoustic imaging. Introduction: Photoacoustic CT spectroscopy (PCT-S) has the potential to identify molecular properties of tumors while overcoming the limited depth resolution associated with optical imaging modalities (e.g., OCT and DOT). Photoacoustics is based on the fact that biological tissue generates high-frequency acoustic signals due to volume of expansion when irradiated by pulsed light. The amplitude of the acoustic signal is proportional to the optical absorption properties of tissue, which varies with wavelength depending on the molecular makeup of the tissue. To obtain quantifiable information necessitate modeling and correcting for photon and acoustic propagation in tumors. Material and Methods: A Monte Carlo (MC) algorithm based on MCML (Monte Carlo for Multi-Layered edia) has been developed to simulate photon propagation within objects comprised of a series of complex 3D surfaces (Mcml3D). This code has been used to simulate and correct for the optical attenuation of photons in blood, and for subcutaneous tumors with homogenous and radially heterogeneous vascular distributions. Results: The NIR spectra for oxygenated and deoxygenated blood as determined from Monte Carlo simulated photoacoustic data matched measured data, and improving oxygen saturation calculations. Subcutaneous tumors with a homogeneous and radially heterogeneous distribution of blood revealed large variations in photon absorption as a function of the scanner projection angle. For select voxels near the periphery of the tumor, this angular profile between the two different tumors appeared similar. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in blood phantoms and restoring the integrity of the NIR spectra associated with oxygenated and deoxygenated blood. 
This code can be used to simulate the influence of intra-tumor heterogeneity on the molecular identification via NIR spectroscopy.

  15. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization, operated in argon at 0.67 Pa and for very short pulses (a few µs), using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the release position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases from 2 to 4 times when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
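
    A transport coefficient like the diffusion coefficient above can be extracted from a simulated electron cloud via the mean-square displacement. A minimal 1-D unmagnetized sketch with made-up step and time values, far simpler than the paper's 3-D magnetized tracking:

```python
import random

rng = random.Random(11)
n_electrons, n_steps = 2000, 400
step, dt = 1.0e-4, 1.0e-9        # m per hop, s per hop (hypothetical)

# Release a cloud at x = 0 and let each electron random-walk for n_steps.
msd = 0.0
for _ in range(n_electrons):
    x = 0.0
    for _ in range(n_steps):
        x += step if rng.random() < 0.5 else -step
    msd += x * x
msd /= n_electrons               # mean-square displacement at time t

# Einstein relation in 1-D: D = <x^2> / (2 t).
D = msd / (2.0 * n_steps * dt)
print(D)   # ~ step**2 / (2 * dt) = 5.0 m^2/s for these numbers
```

    Releasing the cloud from different positions (as in the abstract) simply repeats this tally per release point, yielding position-dependent coefficients.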

  16. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation

    PubMed Central

    Ding, Chizhu; Shi, Shuning; Chen, Jianjun; Wei, Wei; Tan, Zuojun

    2015-01-01

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be separately considered with different parameters to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving the optimal fruit quality inspection without extensive experimental measurements. PMID:26469695

  17. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
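    Among the techniques listed above, replacing linear search algorithms with binary versions targets the sorted energy-grid lookups at the heart of cross-section evaluation. A sketch of the two lookup styles (in Python, not the actual ITS FORTRAN), which must return identical results for the acceleration to be valid:

```python
from bisect import bisect_right

def linear_lookup(grid, e):
    """Original style: O(n) scan for the interval containing energy e."""
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def binary_lookup(grid, e):
    """Accelerated style: O(log n) binary search on the same sorted grid."""
    i = bisect_right(grid, e) - 1
    if 0 <= i < len(grid) - 1:
        return i
    raise ValueError("energy outside grid")

# Logarithmically spaced energy grid (MeV), as is typical for cross sections.
grid = [0.01 * 1.1 ** k for k in range(200)]
i = binary_lookup(grid, 1.0)   # interval index for a 1 MeV photon
```

    Because both routines locate the same interval, the substitution changes timing but not results, matching the paper's requirement of identical or statistically similar output.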

  18. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  19. bhlight: General Relativistic Radiation Magnetohydrodynamics with Monte Carlo Transport

    NASA Astrophysics Data System (ADS)

    Ryan, B. R.; Dolence, J. C.; Gammie, C. F.

    2015-07-01

    We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.

  20. A Monte Carlo method for calculating the energy response of plastic scintillators to polarized photons below 100 keV

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Kanai, Y.; Kataoka, J.; Kiss, M.; Kurita, K.; Pearce, M.; Tajima, H.; Takahashi, H.; Tanaka, T.; Ueno, M.; Umeki, Y.; Yoshida, H.; Arimoto, M.; Axelsson, M.; Marini Bettolo, C.; Bogaert, G.; Chen, P.; Craig, W.; Fukazawa, Y.; Gunji, S.; Kamae, T.; Katsuta, J.; Kawai, N.; Kishimoto, S.; Klamra, W.; Larsson, S.; Madejski, G.; Ng, J. S. T.; Ryde, F.; Rydström, S.; Takahashi, T.; Thurston, T. S.; Varner, G.

    2009-03-01

    The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by light transportation processes inside the scintillator, as well as the generation and multiplication of photoelectrons in the photomultiplier tube, were studied experimentally and have also been taken into account. A Monte Carlo simulation based on the Geant4 toolkit was used to model photon interactions in the scintillators. When using the polarized Compton/Rayleigh scattering processes previously corrected by the authors, scintillator spectra and angular distributions of scattered polarized photons could clearly be reproduced, in agreement with the results obtained at a synchrotron beam test conducted at the KEK Photon Factory. Our simulation successfully reproduces the modulation factor, defined as the ratio of the amplitude to the mean of the distribution of the azimuthal scattering angles, within ˜5% (relative). Although primarily developed for the PoGOLite mission, the method presented here is also relevant for other missions aiming to measure polarization from astronomical objects using plastic scintillator scatterers.
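    The modulation factor quoted above, the ratio of the amplitude to the mean of the azimuthal scattering-angle distribution N(φ) ∝ 1 + M cos 2(φ − φ0), can be estimated from sampled angles via the first even Fourier moment. A sketch with a synthetic, known modulation (not the PoGOLite analysis chain):

```python
import math
import random

def modulation_factor(phis):
    """Estimate the modulation factor M of N(phi) ∝ 1 + M cos 2(phi - phi0)
    from sampled azimuthal angles, using the even Fourier moments
    <cos 2phi> = (M/2) cos 2phi0 and <sin 2phi> = (M/2) sin 2phi0."""
    n = len(phis)
    c = sum(math.cos(2 * p) for p in phis) / n
    s = sum(math.sin(2 * p) for p in phis) / n
    return 2.0 * math.hypot(c, s)

# Draw angles from a distribution with known M = 0.4 by rejection sampling.
rng = random.Random(11)
m_true, phi0 = 0.4, 0.3
samples = []
while len(samples) < 50000:
    p = rng.uniform(0.0, 2.0 * math.pi)
    if rng.random() * (1.0 + m_true) < 1.0 + m_true * math.cos(2 * (p - phi0)):
        samples.append(p)
m_est = modulation_factor(samples)
```

    The Fourier estimator avoids histogram binning; a cosine fit to a binned histogram, as is common in polarimetry, gives an equivalent M for large samples.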

  1. Comparing gold nano-particle enhanced radiotherapy with protons, megavoltage photons and kilovoltage photons: a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; McMahon, Stephen J.; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan

    2014-12-01

    Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 µm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.

  2. Achieving nonreciprocal unidirectional single-photon quantum transport using the photonic Aharonov-Bohm effect.

    PubMed

    Yuan, Luqi; Xu, Shanshan; Fan, Shanhui

    2015-11-15

    We show that nonreciprocal unidirectional single-photon quantum transport can be achieved with the photonic Aharonov-Bohm effect. The system consists of a 1D waveguide coupling to two three-level atoms of the V-type. The two atoms, in addition, are each driven by an external coherent field. We show that the phase of the external coherent field provides a gauge potential for the photon states. With a proper choice of the phase difference between the two coherent fields, the transport of a single photon can exhibit unity contrast in its transmissions for the two propagation directions. PMID:26565819

  3. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. 
Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so that the neutron-induced photon signatures are preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and an HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, the volume-averaged photon flux within the detector, and the high-purity germanium gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude of the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered.
The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems.
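    The central idea above, separating discrete photon emissions from the continuum during the multigroup collapse so that line signatures survive, can be illustrated with a toy binning routine. The group structure and yields below are hypothetical, not the RADSAT-NG library format:

```python
from bisect import bisect_right

GROUP_BOUNDS = [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]   # illustrative MeV boundaries

def collapse(continuum, lines):
    """Bin continuum (energy, yield) pairs into the group structure while
    recording each discrete line (energy, yield) against its group index,
    so exact line energies survive the multigroup collapse."""
    groups = [0.0] * (len(GROUP_BOUNDS) - 1)
    discrete = []
    for e, w in continuum:
        groups[bisect_right(GROUP_BOUNDS, e) - 1] += w
    for e, w in lines:
        discrete.append((bisect_right(GROUP_BOUNDS, e) - 1, e, w))
    return groups, discrete

groups, discrete = collapse(
    continuum=[(0.3, 1.0), (0.7, 2.0), (1.5, 0.5)],
    lines=[(0.511, 4.0), (2.223, 1.0)],   # e.g. annihilation, H(n,g) capture
)
```

    A conventional collapse would add the line yields into the group totals and lose their energies; keeping the discrete list alongside the group vector is what preserves the signatures needed for high-resolution spectroscopy.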

  4. Robust light transport in non-Hermitian photonic lattices

    E-print Network

    Longhi, Stefano; Della Valle, Giuseppe

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode...

  5. Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

    NASA Astrophysics Data System (ADS)

    Malladi, Mayank

    Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of the thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. Gray media (frequency independent phonons) is often assumed in the numerical solution of BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when non-gray media (frequency dependent phonons) are considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects.
In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both gray-media and non-gray-media approaches. The non-gray-media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape and distribution of the nanowire fillers on heat flow and hence thermal conductivity are studied. In addition, the computational performances of the gray-media and non-gray-media approaches are compared.
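    In the relaxation-time approximation used above, a gray-media phonon Monte Carlo draws exponential free-flight times with relaxation time τ, and the sampled ballistic flight distances should average to the kinetic-theory mean free path Λ = v_g τ. A sketch with illustrative silicon-like parameters (not the thesis values):

```python
import math
import random

def mean_free_path(v_g, tau, n_events, rng):
    """Gray-media phonon Monte Carlo: sample free-flight times from the
    exponential distribution p(t) = exp(-t/tau)/tau of the relaxation-time
    approximation and average the ballistic flight distances v_g * t.
    Kinetic theory predicts the result converges to Lambda = v_g * tau."""
    total = 0.0
    for _ in range(n_events):
        t_free = -tau * math.log(rng.random())   # exponential free time
        total += v_g * t_free                    # ballistic flight distance
    return total / n_events

rng = random.Random(3)
v_g, tau = 6400.0, 1e-11        # group velocity (m/s), relaxation time (s)
lam = mean_free_path(v_g, tau, 100000, rng)   # should approach 64 nm
```

    In the non-gray case, v_g and τ become functions of phonon frequency and polarization branch, and the free-flight sampling is repeated per sampled mode; the gray version above is the frequency-independent limit.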

  6. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study.

    PubMed

    Berg, Eric; Roncali, Emilie; Cherry, Simon R

    2015-06-01

    Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040

  7. Optimizing light transport in scintillation crystals for time-of-flight PET: an experimental and optical Monte Carlo simulation study

    PubMed Central

    Berg, Eric; Roncali, Emilie; Cherry, Simon R.

    2015-01-01

    Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040

  8. Dissipationless electron transport in photon-dressed nanostructures.

    PubMed

    Kibis, O V

    2011-09-01

    It is shown that the electron coupling to photons in field-dressed nanostructures can result in the ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without the Joule heating of the nanostructure and is nondissipative. Such a dissipationless electron transport can be realized in strongly coupled electron-photon systems with the broken time-reversal symmetry--particularly, in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519

  9. Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling

    SciTech Connect

    Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.

    2008-01-14

    Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. 
[1]. MCNP5 is used to calculate sensor pulse-height tallies. RADSAT methods, including adaptive, problem-specific energy-group creation, ray-effect mitigation strategies and the porting of deterministic angular flux to MCNP for individual particle creation are described in [2][3][4]. This paper discusses the application of RADSAT to the modeling of gamma-ray spectrometers in RPMs.

  10. Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.

    PubMed

    Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats

    2015-09-01

    For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes a coupled ionizing particle and optical photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum whereas the surface parameter σ(α), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519
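    The four reflectivity types listed above are selected per surface interaction according to user-set probabilities; outside GEANT4 this amounts to a simple roulette-wheel draw. A standalone sketch in which the probabilities are illustrative, not GATE defaults:

```python
import random

# Probabilities for the four GEANT4-style reflection components; they must
# sum to 1. The values here are illustrative, not GATE defaults.
REFLECTION_TYPES = [
    ("specular_spike", 0.60),   # mirror-like reflection about the mean normal
    ("specular_lobe",  0.25),   # reflection about a sampled micro-facet normal
    ("lambertian",     0.10),   # diffuse, cosine-weighted reflection
    ("backscatter",    0.05),   # photon returned along the incoming direction
]

def sample_reflection(rng):
    """Pick a reflection type by cumulative probability (roulette wheel)."""
    u, acc = rng.random(), 0.0
    for name, prob in REFLECTION_TYPES:
        acc += prob
        if u < acc:
            return name
    return REFLECTION_TYPES[-1][0]   # guard against floating-point rounding

rng = random.Random(5)
counts = {name: 0 for name, _ in REFLECTION_TYPES}
for _ in range(20000):
    counts[sample_reflection(rng)] += 1
```

    Over many samples the observed fractions converge to the configured probabilities, which is how changing the reflectivity-type mix shifts the simulated optical spectrum.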

  11. Electron transport through a quantum dot assisted by cavity photons

    NASA Astrophysics Data System (ADS)

    Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

    2013-11-01

    We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry.

  12. Electron transport through a quantum dot assisted by cavity photons.

    PubMed

    Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

    2013-11-20

    We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry. PMID:24132041

  13. Status of the MORSE multigroup Monte Carlo radiation transport code

    SciTech Connect

    Emmett, M.B.

    1993-06-01

    There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

  14. PERFORMANCE MEASUREMENT OF MONTE CARLO PHOTON TRANSPORT ON PARALLEL MACHINES

    E-print Network

    Majumdar, Amit

    Parallel versions of the Monte Carlo photon transport algorithm were developed, the first for the Tera Multi-Threaded Architecture (MTA) using Tera-specific directives. The parallel architectures targeted are the shared-memory Tera MTA, the distributed-memory Cray T3E, and the 8-way SMP IBM SP with Power3 processors.

  15. Efficient simulation of multidimensional phonon transport using energy-based variance-reduced Monte Carlo formulations

    E-print Network

    Peraud, Jean-Philippe Michel

    We present a Monte Carlo method for obtaining solutions of the Boltzmann equation to describe phonon transport in micro- and nanoscale devices. The proposed method can resolve arbitrarily small signals (e.g., temperature ...

  16. Domain decomposition for Monte Carlo particle transport simulations of nuclear reactors

    E-print Network

    Horelik, Nicholas E. (Nicholas Edward)

    2015-01-01

    Monte Carlo (MC) neutral particle transport methods have long been considered the gold-standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards ...

  17. Computational methods for efficient nuclear data management in Monte Carlo neutron transport simulations

    E-print Network

    Walsh, Jonathan A. (Jonathan Alan)

    2014-01-01

    This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...

  18. Soft Photons from transport and hydrodynamics at FAIR energies

    E-print Network

    Andreas Grimm; Bjørn Bäuchle

    2012-11-11

    Direct photon spectra from uranium-uranium collisions at FAIR energies (E(lab) = 35 AGeV) are calculated within the hadronic Ultra-relativistic Quantum Molecular Dynamics transport model. In this microscopic model, one can optionally include a macroscopic intermediate hydrodynamic phase. The hot and dense stage of the collision is then modeled by a hydrodynamical calculation. Photon emission from transport-hydro hybrid calculations is examined for purely hadronic matter and matter that has a cross-over phase transition and a critical end point to deconfined and chirally restored matter at high temperatures. We find the photon spectra in both scenarios to be dominated by Bremsstrahlung. Comparing flow of photons in both cases suggests a way to distinguish these two scenarios.

  19. Robust light transport in non-Hermitian photonic lattices.

    PubMed

    Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932

  20. Robust light transport in non-Hermitian photonic lattices

    NASA Astrophysics Data System (ADS)

    Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della

    2015-08-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.

  1. Robust light transport in non-Hermitian photonic lattices

    PubMed Central

    Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932

  2. Photonic quantum transport in a nonlinear optical fiber

    E-print Network

    Hafezi, Mohammad; Gritsev, Vladimir; Demler, Eugene; Lukin, Mikhail D

    2009-01-01

    We theoretically study the transmission of few-photon quantum fields through a strongly nonlinear optical medium. We develop a general approach to investigate non-equilibrium quantum transport of bosonic fields through a finite-size nonlinear medium and apply it to a recently demonstrated experimental system where cold atoms are loaded in a hollow-core optical fiber. We show that when the interaction between photons is effectively repulsive, the system acts as a single-photon switch. In the case of attractive interaction, the system can exhibit either anti-bunching or bunching, associated with the resonant excitation of bound states of photons by the input field. These effects can be observed by probing statistics of photons transmitted through the nonlinear fiber.

  3. Photonic quantum transport in a nonlinear optical fiber

    E-print Network

    Mohammad Hafezi; Darrick E. Chang; Vladimir Gritsev; Eugene Demler; Mikhail D. Lukin

    2009-11-26

    We theoretically study the transmission of few-photon quantum fields through a strongly nonlinear optical medium. We develop a general approach to investigate non-equilibrium quantum transport of bosonic fields through a finite-size nonlinear medium and apply it to a recently demonstrated experimental system where cold atoms are loaded in a hollow-core optical fiber. We show that when the interaction between photons is effectively repulsive, the system acts as a single-photon switch. In the case of attractive interaction, the system can exhibit either anti-bunching or bunching, associated with the resonant excitation of bound states of photons by the input field. These effects can be observed by probing statistics of photons transmitted through the nonlinear fiber.

  4. Detector-selection technique for Monte Carlo transport in azimuthally symmetric geometries

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.

    1982-01-01

    Many radiation transport problems contain geometric symmetries which are not exploited in obtaining their Monte Carlo solutions. An important class of problems is that in which the geometry is symmetric about an axis. These problems arise in the analyses of a reactor core or shield, spent fuel shipping casks, tanks containing radioactive solutions, radiation transport in the atmosphere (air-over-ground problems), etc. Although amenable to deterministic solution, such problems can often be solved more efficiently and accurately with the Monte Carlo method. For this class of problems, a technique is described in this paper which significantly reduces the variance of the Monte Carlo-calculated effect of interest at point detectors.
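The point-detector estimator the abstract refers to is usually built on next-event estimation: at the source point and at every surviving collision, the analytic probability of reaching the detector uncollided is scored, instead of waiting for a track to hit an infinitesimal detector. A minimal single-group sketch (my own illustrative code, not the paper's azimuthal detector-selection technique; isotropic scattering in an infinite homogeneous medium, with `mu_t`/`mu_s` the total and scattering macroscopic cross sections):

```python
import math
import random

def point_detector_flux(mu_t, mu_s, det, n_histories, rng=random.random):
    """Next-event estimation for a point detector: at the (isotropic) source
    point and at every surviving collision, score the analytic probability
    of reaching `det` uncollided, exp(-mu_t*r)/(4*pi*r^2)."""
    tally = 0.0
    for _ in range(n_histories):
        pos = (0.0, 0.0, 0.0)           # isotropic point source at the origin
        weight = 1.0
        while True:
            r = math.dist(pos, det)
            tally += weight * math.exp(-mu_t * r) / (4.0 * math.pi * r * r)
            # fly an exponentially distributed distance to the next collision
            s = -math.log(1.0 - rng()) / mu_t
            cos_t = 2.0 * rng() - 1.0   # sample an isotropic direction
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * rng()
            d = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
            pos = (pos[0] + s * d[0], pos[1] + s * d[1], pos[2] + s * d[2])
            if rng() >= mu_s / mu_t:    # analog capture: absorbed, history ends
                break
    return tally / n_histories
```

With `mu_s = 0` every history scores exactly the uncollided flux exp(-mu_t·R)/(4πR²); with scattering on, collided contributions are accumulated with far lower variance than analog tallying at a small detector.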

  5. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  6. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

    NASA Astrophysics Data System (ADS)

    Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

    2014-03-01

Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability, and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.

  7. Determination of peripheral underdosage at the lung-tumor interface using Monte Carlo radiation transport calculations

    SciTech Connect

    Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick

    2012-04-01

Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm³ regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient cases, and is particularly useful for interpreting clinical trial data.
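The dose reduction factor defined in this abstract is a simple ratio; assuming doses are supplied for the six cardinal peripheral points, it can be sketched as:

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF: mean of the doses at the six cardinal peripheral points
    divided by the dose at the centre of the tumour."""
    if len(peripheral_doses) != 6:
        raise ValueError("expected doses for the 6 cardinal directions")
    return sum(peripheral_doses) / (6.0 * central_dose)
```

For example (illustrative numbers only), peripheral doses averaging 97% of a 2 Gy central dose give DRF = 0.97, matching the paper's mean 6-MV value.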

  8. Robust light transport in non-Hermitian photonic lattices

    E-print Network

    Stefano Longhi; Davide Gatti; Giuseppe Della Valle

    2015-07-24

Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport that is rather insensitive to disorder or imperfections in the structure. Non-Hermitian transport in two lattice models is considered: a tight-binding lattice with an imaginary gauge field (Hatano-Nelson model), and a non-Hermitian driven binary lattice. In the former case transport in spite of disorder is ensured by a mobility edge that arises because of a non-Hermitian delocalization transition. The possibility to observe non-Hermitian delocalization induced by a synthetic 'imaginary' gauge field is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
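The Hatano-Nelson model named in this abstract is easy to explore numerically. A sketch (my own construction, with illustrative parameter values): under open boundaries the imaginary gauge field can be gauged away and the spectrum stays real even with disorder, while on a ring the eigenvalues spread into the complex plane, the signature of the non-Hermitian delocalization transition.

```python
import numpy as np

def hatano_nelson(n, t=1.0, h=0.5, disorder=0.0, periodic=True, seed=0):
    """Hatano-Nelson lattice: hopping t*e^h to the right (amplified) and
    t*e^-h to the left (damped), i.e. an imaginary gauge field h, plus
    real on-site disorder drawn uniformly from [-disorder, disorder]."""
    rng = np.random.default_rng(seed)
    H = np.diag(disorder * rng.uniform(-1.0, 1.0, n))
    for i in range(n - 1):
        H[i, i + 1] = t * np.exp(h)     # forward hop: amplified
        H[i + 1, i] = t * np.exp(-h)    # backward hop: damped
    if periodic:
        H[n - 1, 0] = t * np.exp(h)     # close the ring
        H[0, n - 1] = t * np.exp(-h)
    return H
```

With open boundaries, the similarity transform ψ_i → e^(-h·i) ψ_i maps H to a Hermitian matrix with the same spectrum, which is why the eigenvalues stay real; on a ring the field cannot be removed and complex eigenvalues appear.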

  9. Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.

    PubMed

    Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B

    2014-11-01

The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967
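The γ-index criterion used in this comparison combines a dose tolerance with a distance-to-agreement tolerance. A minimal 1D implementation with global normalisation (an illustrative sketch, not the analysis code used in the paper):

```python
import math

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol, dist_tol):
    """1D gamma analysis with global normalisation: for every reference
    point, gamma = min over evaluated points of
    sqrt((dose difference / (dose_tol * max ref dose))^2
         + (distance / dist_tol)^2); a point passes when gamma <= 1."""
    d_max = max(d_ref)                  # global dose normalisation
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        gammas.append(min(
            math.sqrt(((de - dr) / (dose_tol * d_max)) ** 2
                      + ((xe - xr) / dist_tol) ** 2)
            for xe, de in zip(x_eval, d_eval)))
    return gammas
```

The pass rate is then the fraction of points with γ ≤ 1; real analyses interpolate the evaluated distribution finely between grid points, which this coarse sketch omits.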

  10. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units.

    PubMed

    Fang, Qianqian; Boas, David A

    2009-10-26

We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNGs), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented in the CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 relative to conventional CPU computation when using 1792 parallel threads. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging. PMID:19997242
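Independent of the GPU acceleration, the core photon-migration kernel is a random walk with exponentially distributed step lengths. A toy CPU sketch (my own, single-group, infinite homogeneous medium) that checks a standard invariant: the mean path length to absorption equals 1/μa.

```python
import math
import random

def path_to_absorption(mu_a, mu_s, rng):
    """Analog photon walk in an infinite homogeneous medium: exponential
    flight lengths with attenuation mu_t = mu_a + mu_s, absorption with
    probability mu_a/mu_t at each interaction.  Returns the total path
    length, whose mean is 1/mu_a.  (Direction sampling does not affect
    this invariant in an infinite medium, so it is omitted here.)"""
    mu_t = mu_a + mu_s
    path = 0.0
    while True:
        path += -math.log(1.0 - rng.random()) / mu_t
        if rng.random() < mu_a / mu_t:
            return path
```

Production codes such as the one described additionally track position, direction, time of flight, and boundary reflections, and replace analog capture with weight-based absorption plus Russian roulette.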

  11. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    SciTech Connect

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E

    2015-01-01

Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  12. The difference of scoring dose to water or tissues in Monte Carlo dose calculations for low energy brachytherapy photon sources

    SciTech Connect

    Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank

    2011-03-15

Purpose: The goal of this work is to compare D_m,m (radiation transported in medium; dose scored in medium) and D_w,m (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: ¹²⁵I, ¹⁰³Pd, and ¹³¹Cs seeds, as well as an EBS operating at 50 kV. Ratios of D_w,m over D_m,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using ¹⁰³Pd) and prostate (using ¹²⁵I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for D_w,m and D_m,m. Results: (1) Differences (D_w,m/D_m,m - 1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D_w,m/D_m,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%. (2) The ratio of D90(w,m) over D90(m,m) for clinical implants matches D_w,m/D_m,m at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
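In large cavity theory, the conversion factor between the two scoring schemes reduces to the ratio of the mass energy-absorption coefficients of water and the medium, averaged over the photon energy-fluence spectrum. A sketch with purely illustrative coefficient and spectrum values (real calculations would use NIST tabulations):

```python
def large_cavity_conversion(spectrum, mu_en_w, mu_en_m):
    """Large-cavity-theory conversion D_w,m/D_m,m: the spectrum-averaged
    mass energy-absorption coefficient of water divided by that of the
    medium.  `spectrum` maps energy -> relative energy fluence;
    `mu_en_w`/`mu_en_m` map energy -> (mu_en/rho) for water and medium
    (all values here are illustrative placeholders)."""
    num = sum(w * mu_en_w[e] for e, w in spectrum.items())
    den = sum(w * mu_en_m[e] for e, w in spectrum.items())
    return num / den
```

For a monoenergetic spectrum the factor collapses to the coefficient ratio at that single energy, which illustrates why the paper finds little variation with distance when spectral shifts are moderate.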

  13. Utilization of a Photon Transport Code to Investigate Radiation Therapy Treatment Planning Quantities and Techniques.

    NASA Astrophysics Data System (ADS)

    Palta, Jatinder Raj

A versatile computer program MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy group representation of the transport equation provides a concise approach to applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and the correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air-ratio and bone-air-ratio are proposed in lieu of the presently used correction factors that are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated for any point brachytherapy source with any arbitrary energy spectrum at various radial distances from this family of curves.

  14. Effect of transverse magnetic fields on dose distribution and RBE of photon beams: comparing PENELOPE and EGS4 Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.

    2008-09-01

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions of as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and the low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population.

  15. The role of plasma evolution and photon transport in optimizing future advanced lithography sources

    E-print Network

    Harilal, S. S.

The role of plasma evolution and photon transport in optimizing future advanced lithography sources ... plasma, ionization, plasma radiation, and details of photon transport in these media. We studied ... photon generation, and their transport and distribution. One of the most important processes ...

  16. Photon-Inhibited Topological Transport in Quantum Well Heterostructures

    NASA Astrophysics Data System (ADS)

    Farrell, Aaron; Pereg-Barnea, T.

    2015-09-01

Here we provide a picture of transport in quantum well heterostructures with a periodic driving field in terms of a probabilistic occupation of the topologically protected edge states in the system. This is done by generalizing methods from the field of photon-assisted tunneling. We show that the time dependent field dresses the underlying Hamiltonian of the heterostructure and splits the system into sidebands. Each of these sidebands is occupied with a certain probability which depends on the drive frequency and strength. This leads to a reduction in the topological transport signatures of the system because of the probability to absorb or emit a photon. Therefore when the voltage is tuned to the bulk gap the conductance is smaller than the expected 2e²/h. We refer to this as photon-inhibited topological transport. Nevertheless, the edge modes reveal their topological origin in the robustness of the edge conductance to disorder and changes in model parameters. In this work the analogy with photon-assisted tunneling allows us to interpret the calculated conductivity and explain the sum rule observed by Kundu and Seradjeh.
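In the standard Tien-Gordon picture of photon-assisted tunneling, which this work generalizes, a drive of strength α = eV_ac/(ħω) distributes tunneling over sidebands n with probability J_n(α)²; these probabilities sum to one, so absorbing or emitting photons only redistributes weight among sidebands (reducing the in-gap conductance). A numerical sketch (my own, evaluating J_n from its integral representation rather than a special-function library):

```python
import numpy as np

def bessel_j(n, x, m=20000):
    """Integer-order Bessel J_n(x) from its integral representation
    (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule)."""
    t = np.linspace(0.0, np.pi, m + 1)
    f = np.cos(n * t - x * np.sin(t))
    return (f.sum() - 0.5 * (f[0] + f[-1])) / m

def sideband_weights(alpha, n_max=30):
    """Tien-Gordon sideband weights: sideband n is occupied with
    probability J_n(alpha)^2, where alpha = e*V_ac/(hbar*omega)."""
    return {n: bessel_j(n, alpha) ** 2 for n in range(-n_max, n_max + 1)}
```

The Bessel identity Σ_n J_n(α)² = 1 is the sum rule referred to in the abstract: total probability is conserved across sidebands.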

  17. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    SciTech Connect

    Iandola, F N; O'Brien, M J; Procassini, R J

    2010-11-29

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  18. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configurations meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.

  19. A Monte Carlo simulation for predicting photon return from sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Kibblewhite, Edward; Jin, Kai; Xue, Suijian; Shen, Zhixia; Bo, Yong; Zuo, Junwei; Wei, Kai

    2015-10-01

A sodium laser guide star is an ideal source for astronomical adaptive optics systems correcting wave-front aberration caused by atmospheric turbulence. However, a compact, high-quality sodium laser with power above 20 W is costly and difficult to manufacture, and even such a laser is not guaranteed to produce a bright enough guide star, owing to the physics of sodium atoms in the atmosphere. It would therefore be helpful to have a prediction tool that estimates the photon-generating performance of arbitrary laser output formats before an actual laser is designed. Based on rate equations, we developed Monte Carlo simulation software that predicts sodium laser guide star photon return for arbitrary laser formats. In this paper, we describe the model underlying our simulation and its implementation, and present comparisons with field test data.
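A rate-equation Monte Carlo of this kind can be illustrated with a deliberately stripped-down two-level toy model (my own sketch, not the authors' sodium model, which must track magnetic sublevels, saturation, and atomic motion): pumping at rate R and spontaneous decay at rate Γ give a long-time photon return rate of Γ·R/(R+Γ).

```python
import math
import random

def photon_return_rate(pump, decay, t_end, rng):
    """Gillespie-style two-level toy model: ground->excited transitions
    occur at rate `pump`, excited->ground at rate `decay` (one photon
    emitted per decay).  Returns photons per unit time, which approaches
    decay*pump/(pump + decay) for long t_end."""
    t, excited, photons = 0.0, False, 0
    while True:
        rate = decay if excited else pump
        t += -math.log(1.0 - rng.random()) / rate   # exponential waiting time
        if t > t_end:
            return photons / t_end
        if excited:
            photons += 1
        excited = not excited
```

The limiting rate follows from the mean cycle time 1/R + 1/Γ: one photon is emitted per pump-decay cycle.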

  20. LDRD project 151362 : low energy electron-photon transport.

    SciTech Connect

    Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James

    2013-09-01

At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.

  1. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

    SciTech Connect

    Miller, S.G.

    1988-08-01

    Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

  2. The theoretical development of a new high speed solution for Monte Carlo radiation transport computations 

    E-print Network

    Pasciak, Alexander Samuel

    2007-04-25

In this thesis, a simple radiation transport problem involving moderate energy photons incident on a three-dimensional target is discussed. By comparing the theoretical evaluation speed of this transport problem on a large FPGA to the evaluation speed of the same...

  3. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear field, and in particular in the fields of core design and radiation analysis. (authors)

  4. Dynamic Monte-Carlo modeling of hydrogen isotope reactivediffusive transport in porous graphite

    E-print Network

    Nordlund, Kai

    Dynamic Monte-Carlo modeling of hydrogen isotope reactive-diffusive transport in porous graphite in a fusion reactor. It is important to study the recycling and mixing of these hydrogen isotopes in graphite, given (among other reasons) the continued use of graphite as a first wall and divertor material and the reaction of hydrogen with carbon atoms.

  5. Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation 

    E-print Network

    Peterson, Jacob Ross

    2014-04-23

    An exponentially-convergent Monte Carlo (ECMC) method is analyzed using the one-group, one-dimension, slab-geometry transport equation. The method is based upon the use of a linear discontinuous finite-element trial space in position and direction...

  6. Monte Carlo particle simulation and finite-element techniques for tandem mirror transport

    SciTech Connect

    Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.

    1985-12-01

    A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution to the Fokker-Planck equations by finite-element expansion. 11 refs.

  7. Dynamic Monte Carlo simulation of coupled transport through a narrow multiply-occupied pore

    E-print Network

    Dezső Boda; Éva Csányi; Dirk Gillespie; Tamás Kristóf

    2013-11-25

    Dynamic Monte Carlo simulations are used to study coupled transport (co-transport) through sub-nanometer-diameter pores. In this classic Hodgkin-Keynes mechanism, an ion species uses the large flux of an abundant ion species to move against its concentration gradient. The efficiency of co-transport is examined for various pore parameters so that synthetic nanopores can be engineered to maximize this effect. In general, the pore must be narrow enough that ions cannot pass each other and the charge of the pore large enough to attract many ions so that they exchange momentum. Co-transport efficiency increases as pore length increases, but even very short pores exhibit co-transport, in contradiction to the usual perception that long pores are necessary. The parameter ranges where co-transport occurs are consistent with current and near-future synthetic nanopore geometry parameters, suggesting that co-transport of ions may be a new application of nanopores.

  8. Update On the Status of the FLUKA Monte Carlo Transport Code*

    NASA Technical Reports Server (NTRS)

    Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.

    2006-01-01

    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions the electromagnetic dissociation of heavy ions has been added along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool. 
On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, dose calculations for radiation therapy as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.

  9. Transport of Neutrons and Photons Through Iron and Water Layers

    NASA Astrophysics Data System (ADS)

    Košťál, Michal; Cvachovec, František; Ošmera, Bohumil; Hansen, Wolfgang; Noack, Klaus

    2009-08-01

    The neutron and photon spectra were measured after iron and water plates placed at the horizontal channel of the Dresden University reactor AK-2. The measurements were performed with the multiparameter spectrometer [1] with a stilbene cylindrical crystal, 10 × 10 mm or 45 × 45 mm; the neutron and photon spectra were measured simultaneously. The calculations were performed with the MCNP code and the nuclear data libraries ENDF/B-VI.2, ENDF/B-VII.0, JENDL-3.3, and JEFF-3.1. The measured channel leakage spectrum was used as the input spectrum for the transport calculation. Both the primary photons from the reactor and those induced by neutron interactions were calculated. The comparison of measurements and calculations through 10 cm of iron and 20 cm of water is presented. In addition, the attenuation of the mixed radiation field by iron layers 5 to 30 cm thick is presented, and the measured and calculated data are compared.

  10. MC-PEPTITA: A Monte Carlo model for Photon, Electron and Positron Tracking In Terrestrial Atmosphere—Application for a terrestrial gamma ray flash

    NASA Astrophysics Data System (ADS)

    Sarria, D.; Blelly, P.-L.; Forme, F.

    2015-05-01

    Terrestrial gamma ray flashes are natural bursts of X and gamma rays, correlated to thunderstorms, that are likely to be produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere and a small part of it may be detected by a satellite in low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then we validate it by comparison with a benchmark GEANT4 simulation with similar settings. Then, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that are reaching satellite altitude.

  11. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
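
    The cloud-moment and macrodispersivity estimates described above can be sketched in a few lines. This is an illustrative reconstruction, assuming the common finite-difference form A = d(sigma^2) / (2 d(mean)) for a steady mean flow; the function names are hypothetical, not from the paper.

```python
def cloud_moments(positions, masses):
    """First and second central spatial moments of a 1-D tracer cloud."""
    total = sum(masses)
    mean = sum(m * x for x, m in zip(positions, masses)) / total
    var = sum(m * (x - mean) ** 2 for x, m in zip(positions, masses)) / total
    return mean, var

def macrodispersivity(mean0, var0, mean1, var1):
    """Finite-difference macrodispersivity A = d(sigma^2) / (2 d(mean)),
    the steady-mean-flow form of A = (1/2v) d(sigma^2)/dt, from cloud
    moments taken at two observation times."""
    return 0.5 * (var1 - var0) / (mean1 - mean0)
```

    In practice the moments would come from many Monte Carlo realizations, with standard errors estimated across the ensemble as the abstract describes.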

  12. Monte Carlo path sampling approach to modeling aeolian sediment transport

    NASA Astrophysics Data System (ADS)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. 
Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
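
    The particle-in-cell step mentioned above, in which the particle ensemble is converted to a field representation each iteration, reduces to depositing particle mass onto a grid. A minimal sketch follows (nearest-grid-point deposition in 1-D; the names and parameters are illustrative assumptions, not taken from this work):

```python
def deposit_to_grid(positions, weights, x0, dx, n_cells):
    """Nearest-grid-point deposition of particle mass onto a 1-D grid,
    the simplest particle-in-cell density estimate (illustrative only).

    positions : particle coordinates
    weights   : particle masses
    x0, dx    : grid origin and cell width
    Returns mass per unit length in each cell.
    """
    density = [0.0] * n_cells
    for x, w in zip(positions, weights):
        i = int((x - x0) / dx)
        if 0 <= i < n_cells:          # drop particles outside the grid
            density[i] += w / dx
    return density
```

    Higher-order schemes (e.g. cloud-in-cell) spread each particle's mass over neighboring cells, but the idea is the same.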

  13. A Comparison of Monte Carlo Particle Transport Algorithms for Binary Stochastic Mixtures

    SciTech Connect

    Brantley, P S

    2009-02-23

    Two Monte Carlo algorithms originally proposed by Zimmerman and Zimmerman and Adams for particle transport through a binary stochastic mixture are numerically compared using a standard set of planar geometry benchmark problems. In addition to previously-published comparisons of the ensemble-averaged probabilities of reflection and transmission, we include comparisons of detailed ensemble-averaged total and material scalar flux distributions. Because not all benchmark scalar flux distribution data used to produce plots in previous publications remains available, we have independently regenerated the benchmark solutions including scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically-realistic scalar flux distributions for the transport problems examined. The first algorithm reproduces the standard Levermore-Pomraning model results for the probabilities of reflection and transmission. The second algorithm generally produces significantly more accurate probabilities of reflection and transmission and also significantly more accurate total and material scalar flux distributions.
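
    Algorithms of this family sample material interfaces on the fly from the chord-length statistics of the mixture. A minimal sketch of one tracking step for a binary Markovian mixture is given below; this is an assumed simplification for illustration, not the published algorithms.

```python
import math
import random

def step(material, sigma_t, mean_chord, rng=random.random):
    """One tracking step of chord-length-style sampling through a binary
    Markovian mixture (simplified sketch; names and structure assumed).

    sigma_t[i]    : total cross-section of material i
    mean_chord[i] : mean chord length of material i (exponential chords)
    Returns (distance_moved, new_material, collided).
    """
    d_coll = -math.log(rng()) / sigma_t[material]     # distance to collision
    d_intf = -mean_chord[material] * math.log(rng())  # distance to interface
    if d_coll < d_intf:
        return d_coll, material, True        # collide in the current material
    return d_intf, 1 - material, False       # cross into the other material
```

    Repeating this step, with a collision kernel applied whenever `collided` is true, yields one realization of transport through the stochastic mixture.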

  14. Time series analysis of Monte Carlo neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Nease, Brian Robert

    A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
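
    The core of the approach is that the lag-1 autocorrelation of the stationary AR(1) process estimates the eigenvalue ratio. A small sketch with a synthetic AR(1) sequence follows; the coefficient 0.6 stands in for the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue and is an arbitrary choice for illustration.

```python
import random

def lag1_autocorrelation(x):
    """Estimate the lag-1 autocorrelation of a stationary sequence x.

    For an AR(1) process x[n] = rho * x[n-1] + noise this estimates rho,
    which in the POP approach plays the role of the eigenvalue ratio.
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

# Synthetic demonstration with an assumed ratio of 0.6:
rng = random.Random(1)
rho = 0.6
x, state = [], 0.0
for _ in range(200000):
    state = rho * state + rng.gauss(0.0, 1.0)
    x.append(state)
```

    With 200,000 samples the estimate recovers rho to within a few parts in a thousand; in the actual method the "sequence" is the POP-projected fission source over Monte Carlo generations.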

  15. MC21 v.6.0 - A Continuous-Energy Monte Carlo Particle Transport Code with Integrated Reactor Feedback Capabilities

    NASA Astrophysics Data System (ADS)

    Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

    2014-06-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. 
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.

  16. Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

    SciTech Connect

    Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

    2008-10-31

    Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

  17. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms.

    PubMed

    Bouchard, Hugo; Bielajew, Alex

    2015-07-01

    To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. PMID:26061045

  18. Monte Carlo linear accelerator simulation of megavoltage photon beams: Independent determination of initial beam parameters

    SciTech Connect

    Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore

    2012-01-15

    Purpose: To individually benchmark the incident electron parameters in a Monte Carlo model of an Elekta linear accelerator operating at 6 and 15 MV. The main objective is to establish a simplified but still precise benchmarking procedure that allows accurate dose calculations of advanced treatment techniques. Methods: The EGSnrc Monte Carlo user codes BEAMnrc and DOSXYZnrc are used for photon beam simulations and dose calculations, respectively. A 5 x 5 cm{sup 2} field is used to determine both the incident electron energy and the electron radial intensity. First, the electron energy is adjusted to match the calculated depth dose to the measured one. Second, the electron radial intensity is adjusted to make the calculated dose profile in the penumbrae region match the penumbrae measured by GafChromic EBT film. Finally, the mean angular spread of the incident electron beam is determined by matching calculated and measured cross-field profiles of large fields. The beam parameters are verified for various field sizes and shapes. Results: The penumbrae measurements revealed a non-circular electron radial intensity distribution for the 6 MV beam, while a circular electron radial intensity distribution could best describe the 15 MV beam. These electron radial intensity distributions, given as the standard deviation of a Gaussian distribution, were found to be 0.25 mm (in-plane) and 1.0 mm (cross-plane) for the 6 MV beam and 0.5 mm (both in-plane and cross-plane) for the 15 MV beam. Introducing a small mean angular spread of the incident electron beam has a considerable impact on the lateral dose profiles of large fields. The mean angular spread was found to be 0.7 deg. and 0.5 deg. for the 6 and 15 MV beams, respectively. Conclusions: The incident electron beam parameters in a Monte Carlo model of a linear accelerator could be precisely and independently determined by the benchmarking procedure proposed. 
As the dose distribution in the penumbra region is insensitive to moderate changes in electron energy and angular spread, accurate penumbra measurements are feasible for benchmarking the electron radial intensity distribution. This parameter is particularly important for accurate dosimetry of MLC-shaped fields and small fields.

  19. Data decomposition of Monte Carlo particle transport simulations via tally servers

    SciTech Connect

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord

    2013-11-01

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
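
    The tracking-processor/tally-server split can be sketched with message queues. The toy below is a single-machine analogue using threads; the actual implementation communicates between compute nodes over the network, and all names and numbers here are illustrative.

```python
import queue
import threading

def tally_server(inbox, tallies, n_trackers):
    """Receive (tally_index, score) messages until every tracker reports done."""
    done = 0
    while done < n_trackers:
        msg = inbox.get()
        if msg is None:              # sentinel: one tracker finished
            done += 1
        else:
            idx, score = msg
            tallies[idx] += score

def tracker(inbox, scores):
    """Stand-in for a tracking processor; forwards precomputed scores."""
    for item in scores:
        inbox.put(item)
    inbox.put(None)

# Toy run: two trackers, one server, four tally bins (made-up scores).
inbox = queue.Queue()
tallies = [0.0] * 4
trackers = [
    threading.Thread(target=tracker, args=(inbox, [(0, 1.0), (2, 0.5)])),
    threading.Thread(target=tracker, args=(inbox, [(0, 2.0), (3, 0.25)])),
]
server = threading.Thread(target=tally_server, args=(inbox, tallies, len(trackers)))
server.start()
for t in trackers:
    t.start()
for t in trackers:
    t.join()
server.join()
```

    The point of the decomposition is that the (possibly enormous) `tallies` array lives only on the server, so tracking processors need no per-node copy of the tally memory.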

  20. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy. PMID:23085901

  2. Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin

    2010-03-01

    Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model for bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain, and the synovial fluid layer in the joints. In this paper, a hybrid radiosity-diffusion model is proposed for dealing with non-scattering regions within diffusing domains. This hybrid method incorporates a priori information on the geometry of non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as the 'gold standard' for photon transport simulation.

  3. Minimizing the cost of splitting in Monte Carlo radiation transport simulation

    SciTech Connect

    Juzaitis, R.J.

    1980-10-01

    A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S/sub n/ (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, sigma/sup 2//sub s/tau p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
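
    The splitting and Russian roulette operations whose cost (the product of sample variance and time per history) is being optimized can be sketched as a single weight-preserving routine. This is an illustrative sketch, not the paper's deterministic analysis; production codes track full particle states.

```python
import random

def split_or_roulette(weight, importance_ratio, rng=random.random):
    """Weight-preserving splitting / Russian roulette at an importance surface.

    With r = I_new / I_old: for r >= 1 the particle is split into about r
    copies of weight w/r; for r < 1 it survives roulette with probability r
    at weight w/r.  Expected total weight is w in both cases.  Returns the
    weights of the surviving copies.
    """
    r = importance_ratio
    if r >= 1.0:
        n = int(r)
        if rng() < r - n:            # handle the fractional part stochastically
            n += 1
        return [weight / r] * n
    if rng() < r:                    # Russian roulette: survive with probability r
        return [weight / r]
    return []
```

    The cost analysis in the abstract then asks where to place such surfaces and what ratios r minimize the variance-time product.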

  4. A Monte Carlo model for out-of-field dose calculation from high-energy photon therapy

    SciTech Connect

    Kry, Stephen F.; Titt, Uwe; Followill, David; Poenisch, Falk; Vassiliev, Oleg N.; White, R. Allen; Stovall, Marilyn; Salehpour, Mohammad

    2007-09-15

    As cancer therapy becomes more efficacious and patients survive longer, the potential for late effects increases, including effects induced by radiation dose delivered away from the treatment site. This out-of-field radiation is of particular concern with high-energy radiotherapy, as neutrons are produced in the accelerator head. We recently developed an accurate Monte Carlo model of a Varian 2100 accelerator using MCNPX for calculating the dose away from the treatment field resulting from low-energy therapy. In this study, we expanded and validated our Monte Carlo model for high-energy (18 MV) photon therapy, including both photons and neutrons. Simulated out-of-field photon doses were compared with measurements made with thermoluminescent dosimeters in an acrylic phantom up to 55 cm from the central axis. Simulated neutron fluences and energy spectra were compared with measurements using moderated gold foil activation in moderators and data from the literature. The average local difference between the calculated and measured photon dose was 17%, including doses as low as 0.01% of the central axis dose. The out-of-field photon dose varied substantially with field size and distance from the edge of the field but varied little with depth in the phantom, except at depths shallower than 3 cm, where the dose sharply increased. On average, the difference between the simulated and measured neutron fluences was 19% and good agreement was observed with the neutron spectra. The neutron dose equivalent varied little with field size or distance from the central axis but decreased with depth in the phantom. Neutrons were the dominant component of the out-of-field dose equivalent for shallow depths and large distances from the edge of the treatment field. This Monte Carlo model is useful to both physicists and clinicians when evaluating out-of-field doses and associated potential risks.

  5. MONTE CARLO PARTICLE TRANSPORT IN MEDIA WITH EXPONENTIALLY VARYING TIME-DEPENDENT CROSS-SECTIONS

    SciTech Connect

    F. BROWN; W. MARTIN

    2001-02-01

    A probability density function (PDF) and random sampling procedure for the distance to collision were derived for the case of exponentially varying cross-sections. Numerical testing indicates that both are correct. This new sampling procedure has direct application in a new method for Monte Carlo radiation transport, and may be generally useful for analyzing physical problems where the material cross-sections change very rapidly in an exponential manner.
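    The sampling procedure described in the abstract can be sketched directly. A minimal illustration, assuming a cross-section of the form Σ(s) = Σ0·exp(λs) along the flight path: setting the optical depth τ(s) = Σ0(exp(λs) − 1)/λ equal to an exponentially distributed random optical depth −ln ξ and inverting gives the distance to collision, with the ray escaping whenever the total optical depth ahead is exhausted (possible for λ < 0). Function and variable names are illustrative, not from the paper.

```python
import math

def sample_collision_distance(sigma0, lam, xi):
    """Sample the distance to collision for Sigma(s) = sigma0 * exp(lam * s).

    Inverts tau(s) = sigma0 * (exp(lam*s) - 1) / lam against an
    exponentially distributed optical depth tau = -ln(xi).
    Returns None if the particle escapes without colliding, which can
    happen when lam < 0 and the optical depth to infinity, sigma0/|lam|,
    is finite.
    """
    tau = -math.log(xi)            # exponentially distributed optical depth
    if abs(lam) < 1e-12:           # constant cross-section limit
        return tau / sigma0
    arg = 1.0 + lam * tau / sigma0
    if arg <= 0.0:                 # not enough material ahead: escape
        return None
    return math.log(arg) / lam
```

For λ → 0 the expression reduces to the familiar s = −ln ξ / Σ0 of constant-cross-section transport.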

  6. A computationally efficient moment-preserving Monte Carlo electron transport method with implementation in Geant4

    NASA Astrophysics Data System (ADS)

    Dixon, D. A.; Prinja, A. K.; Franke, B. C.

    2015-09-01

    This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.

  7. Exponentially-convergent Monte Carlo for the 1-D transport equation

    SciTech Connect

    Peterson, J. R.; Morel, J. E.; Ragusa, J. C.

    2013-07-01

    We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)

  8. Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.

    2003-01-01

    Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.

  9. Dosimetric dependences of bone heterogeneity and beam angle on the unflattened and flattened photon beams: A Monte Carlo comparison

    NASA Astrophysics Data System (ADS)

    Chow, James C. L.; Owrangi, Amir M.

    2014-08-01

    The variations of depth and surface dose with bone heterogeneity and beam angle were compared between unflattened and flattened photon beams using Monte Carlo simulations. Phase-space files of the 6 MV photon beams with a field size of 10×10 cm2 were generated with and without the flattening filter based on a Varian TrueBeam linac. Depth and surface doses were calculated in bone and water phantoms using Monte Carlo simulations (the EGSnrc-based code). Dose calculations were repeated with the angles of the unflattened and flattened beams turned from 0° to 15°, 30°, 45°, 60°, 75° and 90° in the bone and water phantoms. Monte Carlo results of depth doses showed that, compared to the flattened beam, the unflattened photon beam had a higher dose in the build-up region but a lower dose beyond the depth of maximum dose. Dose ratios of the unflattened to flattened beams were calculated in the range of 1.6-2.6 with the beam angle varying from 0° to 90° in water. Similar results were found in the bone phantom. In addition, surface doses about 2.5 times higher were found with beam angles of 0° and 15° in the bone and water phantoms. However, the surface dose deviation between the unflattened and flattened beams became smaller with increasing beam angle. Dose enhancements due to bone backscatter were also found at the water-bone and bone-water interfaces for both the unflattened and flattened beams in the bone phantom. With the Monte Carlo beams cross-calibrated to the monitor unit, the variations of depth and surface dose with bone heterogeneity and beam angle were thus quantified. For the unflattened and flattened photon beams, the surface dose and the range of depth dose ratios (unflattened to flattened beam) decreased with increasing beam angle. The dosimetric comparison in this study is useful in understanding the characteristics of the unflattened photon beam with respect to depth and surface dose in the presence of bone heterogeneity.

  10. Monte Carlo study of photon beams from medical linear accelerators: Optimization, benchmark and spectra

    NASA Astrophysics Data System (ADS)

    Sheikh-Bagheri, Daryoush

    1999-12-01

    BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The efficiency of the uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations a further factor of 2 more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1σ level and for all at the 2σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.

  11. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    SciTech Connect

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-05-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.

  12. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(f(clin), f(msr))(Q(clin), Q(msr)) with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k(f(clin), f(msr))(Q(clin), Q(msr)). The aim of this study is the determination of the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water to air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f(clin), f(msr))(Q(clin), Q(msr)) is of the order of 1.2 at a field size of 1 × 1 cm(2) for the large volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW 60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm(2), but below this field size there is a steep decrease of k(f(clin), f(msr))(Q(clin), Q(msr)) below unity, i.e. a strong overestimation of dose. 
Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm(2), i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) is closest to unity in small fields and mainly independent of the electron beam spot size. PMID:23514734
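    The correction factor in this formalism is a double dose ratio. A minimal sketch (the function name and arguments are illustrative, not from the paper): k(f(clin), f(msr))(Q(clin), Q(msr)) is the water-to-detector dose ratio in the clinical small field divided by the same ratio in the machine-specific reference (msr) field, so a detector that under-responds in the small field yields k > 1.

```python
def field_correction_factor(dw_clin, ddet_clin, dw_msr, ddet_msr):
    """k(f_clin, f_msr; Q_clin, Q_msr): the water-to-detector dose ratio
    in the clinical field divided by the same ratio in the msr field.
    All four inputs are Monte Carlo calculated doses."""
    return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)
```

For example, a detector reading 20% low relative to water in the small field, with perfect water equivalence in the reference field, gives k = 1.25.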

  13. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards

    NASA Astrophysics Data System (ADS)

    Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

    2014-06-01

    An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and testing involving radiation dose related problems. In particular, the paper discusses electron transport simulations using the class-II condensed history method. The considered electron energies range from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then ported to the GPU using the CUDA C 5.0 standard. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6×10^6 electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.

  14. Light transport and lasing in complex photonic structures

    NASA Astrophysics Data System (ADS)

    Liew, Seng Fatt

    Complex photonic structures refer to composite optical materials with dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites is greatly different from homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena and it has been a fast growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters including dielectric filling fraction and degree of order that can be tuned to create a broad isotropic PBG. The largest PBG is produced by the dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples. 
Our theoretical modeling and analysis explains why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely-related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultra-violet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolutionary path for their nanostructure. While the absorption of light is ubiquitous in nature and in applications, the question remains how absorption modifies the transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique has potential application as an on-chip tunable light source.

  15. Neutron-induced photon production in MCNP

    SciTech Connect

    Little, R.C.; Seamon, R.E.

    1983-01-01

    An improved method of neutron-induced photon production has been incorporated into the Monte Carlo transport code MCNP. The new method makes use of all partial photon-production reaction data provided by ENDF/B evaluators including photon-production cross sections as well as energy and angular distributions of secondary photons. This faithful utilization of sophisticated ENDF/B evaluations allows more precise MCNP calculations for several classes of coupled neutron-photon problems.
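    Selecting among the partial photon-production reactions is a standard discrete-inversion sampling step. A minimal sketch of that selection, assuming the partial cross-sections have already been looked up at the incident neutron energy; the function is illustrative and is not MCNP's actual implementation:

```python
def sample_channel(partial_xs, xi):
    """Pick a photon-production channel with probability proportional to
    its partial cross-section. partial_xs is a list of partial
    cross-sections (barns); xi is a uniform random number in [0, 1)."""
    total = sum(partial_xs)
    threshold = xi * total
    acc = 0.0
    for i, xs in enumerate(partial_xs):
        acc += xs
        if threshold < acc:
            return i
    return len(partial_xs) - 1   # guard against floating-point round-off
```

Once a channel is chosen, the secondary photon's energy and angle would be sampled from that channel's ENDF/B distributions.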

  16. A bone composition model for Monte Carlo x-ray transport simulations.

    PubMed

    Zhou, Hu; Keall, Paul J; Graves, Edward E

    2009-03-01

    In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones. PMID:19378761
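    The fitting step described in the abstract can be illustrated with a first-order polynomial in pure Python. This is a hedged sketch only: the (density, calcium mass fraction) reference points below are hypothetical placeholders, not the published bone data, and the authors fitted general polynomials rather than necessarily a straight line.

```python
def least_squares_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (first-order polynomial)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical reference bones: (density in g/cm^3, Ca mass fraction).
REF = [(1.33, 0.099), (1.46, 0.136), (1.61, 0.176), (1.92, 0.225)]
A, B = least_squares_line([d for d, _ in REF], [w for _, w in REF])

def ca_fraction(density):
    """Calcium mass fraction predicted from bone density by the fit."""
    return A + B * density
```

In the full model, phosphorus would be fitted the same way and the remaining elements assigned their averaged compositions, renormalized so the mass fractions sum to one.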

  17. A bone composition model for Monte Carlo x-ray transport simulations

    SciTech Connect

    Zhou Hu; Keall, Paul J.; Graves, Edward E.

    2009-03-15

    In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.

  18. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of the MC simulation and yields a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663

  19. Cavity-photon-switched coherent transient transport in a double quantum waveguide

    SciTech Connect

    Abdullah, Nzar Rauf Gudmundsson, Vidar; Tang, Chi-Shung; Manolescu, Andrei

    2014-12-21

    We study cavity-photon-switched coherent electron transport in a symmetric double quantum waveguide. The waveguide system is weakly connected to two electron reservoirs, but strongly coupled to a single quantized photon cavity mode. A coupling window is placed between the waveguides to allow electron interference or inter-waveguide transport. The transient electron transport in the system is investigated using a quantum master equation. We present a cavity-photon-tunable semiconductor quantum waveguide implementation of an inverter quantum gate, in which the output of the waveguide system may be selected via the choice of an appropriate photon number or “photon frequency” of the cavity. In addition, the importance of the photon polarization in the cavity, that is, either parallel or perpendicular to the direction of electron propagation in the waveguide system, is demonstrated.

  20. Selection of voxel size and photon number in voxel-based Monte Carlo method: criteria and applications.

    PubMed

    Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan

    2015-09-01

    The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost is higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons is used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with local grid refinement to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the remainder. The total photon number is selected based on the voxel size of the coarse grid, and the photon-splitting scheme is developed to satisfy the statistical accuracy requirement in the fine-grid area. Results show that the local grid refinement technique with photon ray splitting can accelerate the computation by 7.6 times (reducing the time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue containing port wine stain lesions. PMID:26417866
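    The two criteria lend themselves to a compact sketch. The 5× factor is the paper's stated rule of thumb; the function names and packaging are illustrative. Ray splitting replaces one photon packet by several lighter ones whose weights sum to the original, which preserves the expected energy deposition while improving statistics in the fine-grid region:

```python
def min_photon_number(n_voxels, factor=5):
    """Paper's rule of thumb: launch at least 5x as many photons as
    there are voxels in the simulation grid."""
    return factor * n_voxels

def split_packet(weight, n_split):
    """Photon ray splitting: replace one photon packet by n_split
    sub-packets whose weights sum to the original weight, keeping the
    energy-deposition estimate unbiased while boosting statistics."""
    return [weight / n_split] * n_split
```

For a 100×100×50 grid the criterion gives at least 2.5 million launched photons.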

  1. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

    PubMed

    Jabbari, Keyvan; Seuntjens, Jan

    2014-07-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport, with the MCNPX code used to generate the tracks. A set of data including the track of the particle was produced for each material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated using MCNPX as the reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, calculating the dose for 10(6) particles takes less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
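    The track-reuse idea can be sketched in a few lines. This is an illustrative skeleton only, under the assumption that tracks are pre-generated offline (e.g., with MCNPX) as lists of step-length/energy-deposit pairs and that the real code keeps one such bank per material; none of these names come from the paper.

```python
import random

def build_track_bank(n_tracks, sample_track):
    """Pre-generate a bank of particle tracks once, offline.  Each track
    is a list of (step_length_cm, energy_deposit_MeV) pairs produced by
    a full-physics sampler such as MCNPX."""
    return [sample_track() for _ in range(n_tracks)]

def transport_one(bank, rng):
    """Fast transport: replay a randomly chosen pre-generated track
    instead of re-sampling every interaction from physics models."""
    return bank[rng.randrange(len(bank))]
```

The speed-up comes from replacing per-interaction physics sampling with a table lookup; geometry and heterogeneity handling would then rescale the stored steps by the local material properties.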

  2. Simulation and Analysis of a Tissue Equivalent Proportional Counter Using the Monte Carlo Transport Code FLUKA 

    E-print Network

    Northum, Jeremy Dell

    2011-08-08

    Elemental composition of A-150 tissue-equivalent plastic (fraction by weight): H (Z = 1) 0.101327; C (Z = 6) 0.775501; N (Z = 7) 0.035057; O (Z = 8) 0.052316; F (Z = 9) 0.017422; Ca (Z = 20) 0.018378. All FLUKA simulations were run with the TEPC in a vacuum. Gersey did not give specifications...

  3. Dosimetric variation due to the photon beam energy in the small-animal irradiation: A Monte Carlo study

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.

    2010-10-15

    Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly between the heterogeneous, homogeneous, and bone homogeneous phantom for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. 
This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies of the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.
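    The phantom-construction step, overriding voxel tissue labels derived from the CT image set, is simple to sketch. A minimal illustration; the function and label names are assumptions, not identifiers from the study:

```python
def override_tissues(voxels, mode):
    """Build the study's three phantoms from one labelled voxel grid:
    'heterogeneous' keeps every label, 'homogeneous' overrides every
    voxel to soft tissue, and 'bone_homogeneous' overrides only the
    bone voxels to soft tissue."""
    if mode == "heterogeneous":
        return list(voxels)
    if mode == "homogeneous":
        return ["soft_tissue"] * len(voxels)
    if mode == "bone_homogeneous":
        return ["soft_tissue" if v == "bone" else v for v in voxels]
    raise ValueError("unknown phantom mode: %s" % mode)
```

Comparing doses computed on the three grids then isolates the contribution of each heterogeneity (bone versus lung) to the total dosimetric effect.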

  4. Backscatter towards the monitor ion chamber in high-energy photon and electron beams: charge integration versus Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Verhaegen, F.; Symonds-Tayler, R.; Liu, H. H.; Nahum, A. E.

    2000-11-01

    In some linear accelerators, the charge collected by the monitor ion chamber is partly caused by backscattered particles from accelerator components downstream from the chamber. This influences the output of the accelerator and also has to be taken into account when output factors are derived from Monte Carlo simulations. In this work, the contribution of backscattered particles to the monitor ion chamber response of a Varian 2100C linac was determined for photon beams (6, 10 MV) and for electron beams (6, 12, 20 MeV). The experimental procedure consisted of charge integration from the target in a photon beam or from the monitor ion chamber in electron beams. The Monte Carlo code EGS4/BEAM was used to study the contribution of backscattered particles to the dose deposited in the monitor ion chamber. Both measurements and simulations showed a linear increase in backscatter fraction with decreasing field size for photon and electron beams. For 6 MV and 10 MV photon beams, a 2-3% increase in backscatter was obtained for a 0.5×0.5 cm2 field compared to a 40×40 cm2 field. The results for the 6 MV beam were slightly higher than for the 10 MV beam. For electron beams (6, 12, 20 MeV), an increase of similar magnitude was obtained from measurements and simulations for 6 MeV electrons. For higher energy electron beams a smaller increase in backscatter fraction was found. The problem is of less importance for electron beams since large variations of field size for a single electron energy usually do not occur.

  6. Monte Carlo Calculation of Slow Electron Beam Transport in Solids:. Reflection Coefficient Theory Implications

    NASA Astrophysics Data System (ADS)

    Bentabet, A.

    The reflection coefficient theory developed by Vicanek and Urbassek showed that the backscattering coefficient of light ions impinging on semi-infinite solid targets is strongly related to the range and to the first transport cross-section. In this work we show that, in the electron case, not only the backscattering coefficient but also most electron transport quantities (such as the mean penetration depth, the diffusion polar angles, and the final backscattering energy) are strongly correlated with these two quantities (the range and the first transport cross-section). In contrast, most electron transport quantities are only weakly correlated with the angular distribution of scattering and with the total elastic cross-section. To make the study as straightforward and clear as possible, we used different input sets of elastic cross-sections and ranges in our Monte Carlo code to study the mean penetration depth and the backscattering coefficient of slow electrons impinging on semi-infinite aluminum and gold at energies up to 10 keV. The present study can be extended to other materials and other transport quantities using the same models.

  7. Frequency domain photon migration in the delta-P1 approximation: analysis of ballistic, transport, and diffuse regimes.

    PubMed

    You, J S; Hayakawa, C K; Venugopalan, V

    2005-08-01

    The standard diffusion approximation (SDA) to the Boltzmann transport equation (BTE) is commonly used to describe radiative transport for biomedical applications of frequency-domain diffuse optical imaging and spectroscopy. Unfortunately, the SDA is unable to provide accurate radiative transport predictions on spatial scales comparable to the transport mean free path and for media in which optical scattering is not dominant over absorption. Here, we develop and demonstrate the use of the delta-P1 approximation to provide improved radiative transport estimates in the frequency domain via the addition of a Dirac delta function to both radiance and phase function approximations. Specifically, we consider photon density wave propagation resulting from the illumination of an infinite turbid medium with an embedded, intensity-modulated, spherical light source. We examine the accuracy of the standard diffusion and delta-P1 approximations relative to Monte Carlo simulations that provide exact solutions to the BTE. This comparison establishes the superior accuracy of the delta-P1 approximation relative to the SDA, which is most notable at distances less than 3 transport mean free paths from the source. In addition, we demonstrate that the differences in photon density wave propagation in a highly forward scattering medium (g1=0.95) vs. an isotropically scattering medium (g1=0) provide a basis to define three spatial regimes where the light field is dominated by (a) unscattered/ballistic light, (b) minimally scattered light, and (c) diffusely scattered light. We examine the impact of optical properties, source modulation frequency, and numerical aperture of detection on the spatial extent and location of these regimes. PMID:16196600
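The SDA baseline that the delta-P1 results are compared against has a closed form in this geometry: an intensity-modulated point source in an infinite medium produces a damped spherical photon density wave. A minimal sketch, with illustrative optical properties (not values from the paper) and the point-source limit of the spherical source assumed:

```python
import numpy as np

mua = 0.01    # absorption coefficient [1/mm] (assumed)
musp = 1.0    # reduced scattering coefficient [1/mm] (assumed)
v = 214.0     # light speed in tissue [mm/ns] (refractive index ~1.4)
omega = 2 * np.pi * 0.1   # 100 MHz source modulation, in rad/ns

D = 1.0 / (3.0 * (mua + musp))            # diffusion coefficient [mm]
k = np.sqrt((mua + 1j * omega / v) / D)   # complex wavenumber [1/mm]

def fluence(r):
    """Complex AC fluence at distance r [mm] from a unit point source (SDA)."""
    return np.exp(-k * r) / (4 * np.pi * D * r)

r = np.array([1.0, 5.0, 10.0])
amp = np.abs(fluence(r))        # amplitude decays ~ exp(-Re(k) r) / r
phase = -np.angle(fluence(r))   # phase lag grows as Im(k) * r
```

Monte Carlo or delta-P1 results would be compared against this amplitude and phase at each source-detector distance.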

  8. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E. (Livermore, CA)

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
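The core idea of a correlated histogram representation, i.e. sample one variable from its marginal and the next from a histogram conditioned on the first bin, can be sketched as follows. The bin probabilities here are invented for illustration and are not the patented representation itself:

```python
import random

random.seed(11)

# Marginal probability of each energy bin (assumed values).
ENERGY_P = [0.5, 0.3, 0.2]
# P(angle bin | energy bin); each row sums to 1 (assumed correlation structure).
ANGLE_P = [
    [0.7, 0.2, 0.1],   # low-energy photons: mostly forward
    [0.4, 0.4, 0.2],
    [0.2, 0.3, 0.5],   # high-energy photons: broader
]

def sample_bin(probs):
    """Pick a bin index with probability proportional to probs."""
    r = random.random()
    for i, p in enumerate(probs):
        if r < p:
            return i
        r -= p
    return len(probs) - 1

def sample_photon():
    e = sample_bin(ENERGY_P)        # energy from the marginal
    a = sample_bin(ANGLE_P[e])      # angle correlated with that energy bin
    return e, a

counts = {}
for _ in range(100000):
    key = sample_photon()
    counts[key] = counts.get(key, 0) + 1
```

The joint frequencies converge to the product of the marginal and conditional probabilities, which is what makes the representation both compact and fast to sample.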

  9. Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers

    E-print Network

    Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian

    2015-01-01

    We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the case of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentiall...

  10. Comparison of generalized transport and Monte-Carlo models of the escape of a minor species

    NASA Technical Reports Server (NTRS)

    Demars, H. G.; Barakat, A. R.; Schunk, R. W.

    1993-01-01

    The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments and is less good for higher-order moments. Moments of order higher than the heat flow and hence beyond the level of description provided by the transport model have a noticeable effect on the shape of distribution functions in the collisionless region.

  11. Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC

    SciTech Connect

    Sun, C. C.; You, A. H.; Wong, E. K.

    2010-07-07

    The Monte Carlo (MC) simulation of electron transport properties in the high-electric-field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering, and acoustic phonon scattering, are evaluated. The electron drift velocity, energy, and free-flight time are simulated as functions of the applied electric field at an impurity concentration of 1×10^18 cm^-3 at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for the two polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows the complete electron transport properties of 4H- and 6H-SiC.
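The free-flight/scatter cycle at the heart of such an ensemble MC code can be sketched with constant placeholder rates and the usual self-scattering trick; a real code evaluates energy-dependent rates for each mechanism, whereas all rate values below are assumptions:

```python
import math, random

random.seed(1)

# Partial scattering rates [1/s]; placeholder constants, not SiC values.
RATES = {"acoustic": 1.0e12, "optical": 2.0e12, "polar_optical": 3.0e12}
GAMMA0 = 8.0e12                       # constant total rate incl. self-scattering
SELF_RATE = GAMMA0 - sum(RATES.values())

def free_flight():
    """Free-flight time sampled from an exponential with total rate GAMMA0."""
    return -math.log(random.random()) / GAMMA0

def choose_mechanism():
    """Pick a mechanism with probability proportional to its partial rate."""
    x = random.random() * GAMMA0
    for name, rate in RATES.items():
        if x < rate:
            return name
        x -= rate
    return "self"                     # fictitious event: carrier state unchanged

n = 200000
mean_t = sum(free_flight() for _ in range(n)) / n
frac_self = sum(1 for _ in range(n) if choose_mechanism() == "self") / n
```

The self-scattering ceiling keeps the flight-time distribution exactly exponential even when the true total rate varies with carrier energy.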

  12. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, M J; Brantley, P S

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
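A toy serial analog of point (2), deciding that particle streaming has finished, loops until a global reduction over all domains reports zero in-flight particles. The domain count, absorption probability, and ring topology below are invented for illustration, and the global sum stands in for an MPI allreduce:

```python
import random

random.seed(42)
NDOM, NPART = 8, 1000
domains = {d: [1.0] * NPART for d in range(NDOM)}  # particle weights per domain
absorbed = 0
steps = 0

while True:
    # Serial stand-in for an MPI allreduce over in-flight particle counts.
    if sum(len(p) for p in domains.values()) == 0:
        break                                      # streaming has terminated
    steps += 1
    outboxes = {d: [] for d in range(NDOM)}
    for d in range(NDOM):
        for w in domains[d]:
            r = random.random()
            if r < 0.5:
                absorbed += 1                      # history ends in this domain
            else:
                nbr = (d + (1 if r < 0.75 else -1)) % NDOM
                outboxes[nbr].append(w)            # stream to a neighbor domain
        domains[d] = []
    for d in range(NDOM):                          # "receive" phase
        domains[d].extend(outboxes[d])
```

In a real distributed code the difficulty is exactly this termination test: it must be detected without a synchronous global view, which is why the paper treats it as a scalability problem in its own right.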

  13. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William

    2011-01-01

    A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describes the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105 days at Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/cm^2). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice of graded-Z shield analysis.

  14. Generalized Particle Concept for Adjoint Monte Carlo Calculations of Coupled Gamma-Ray-Electron-Positron Transport

    SciTech Connect

    Borisov, N.M.; Panin, M.P.

    2005-07-15

    Adjoint Monte Carlo methods for coupled transport are developed. The phase space is extended by the introduction of an additional discrete coordinate (the particle type, yielding a so-called generalized particle). The generalized particle concept allows the treatment of the transport of mixed radiation as a process with only one particle outgoing from a collision, regardless of the physical picture of the interaction. In addition to the forward equation for the generalized particle, the adjoint equation is also derived. The proposed concept is applied to the adjoint equation of coupled gamma-ray-electron-positron transport. Charged particle transport is treated in the continuous slowing down approximation and with Molière's theory of multiple scattering, for which special adjoint sampling methods are suggested. A new approach to simulation of fixed-energy secondary radiation is implemented in the generalized particle concept. This approach performs fixed-energy secondary radiation simulation as the local energy estimator through an intermediate state with fixed energy. A comparison of forward and adjoint calculations for energy absorption shows the same results for radionuclide energies with and without electron equilibrium. Adjoint methods show greater efficiency in thin slabs.

  15. Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries

    E-print Network

    Guillaume Bal; Ian Langmore

    2011-04-13

    Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method; and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or more precisely a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we numerically demonstrate significant variance reductions and speed-ups (figures of merit).
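The variance-reduction payoff can be seen in a one-dimensional transport caricature: estimating deep-penetration transmission through a purely absorbing slab. Analog sampling almost never scores, while sampling from a stretched exponential and weighting by the likelihood ratio recovers the answer cheaply. All numbers below are illustrative; a true adjoint-based scheme would bias toward the exact importance function rather than a hand-picked rate:

```python
import math, random

random.seed(0)
SIGMA, L, N = 1.0, 10.0, 20000
exact = math.exp(-SIGMA * L)        # transmission probability, ~4.5e-5

# Analog: score 1 if the sampled flight exceeds L (almost never happens).
analog_hits = sum(1 for _ in range(N)
                  if -math.log(random.random()) / SIGMA > L)
analog = analog_hits / N

# Importance sampled: draw path lengths from a smaller rate SIGMA_B and
# weight each score by the likelihood ratio pdf_true(x) / pdf_biased(x).
SIGMA_B = 0.1
total = 0.0
for _ in range(N):
    x = -math.log(random.random()) / SIGMA_B
    if x > L:
        total += (SIGMA / SIGMA_B) * math.exp(-(SIGMA - SIGMA_B) * x)
biased = total / N
```

With the same number of histories, the biased estimator lands within a few percent of the exact value while the analog estimator typically records zero or one hit.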

  16. Multi-layer diffusion approximation for photon transport in biological tissue 

    E-print Network

    Hollmann, Joseph

    2009-06-02

    MULTI-LAYER DIFFUSION APPROXIMATION FOR PHOTON TRANSPORT IN BIOLOGICAL TISSUE. A Thesis by Joseph Hollmann, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2007. Major Subject: Biomedical Engineering.

  17. Analysis of atmospheric gamma-ray flashes detected in near space with allowance for the transport of photons in the atmosphere

    SciTech Connect

    Babich, L. P. Donskoy, E. N.; Kutsyk, I. M.

    2008-07-15

    Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.

  18. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2008-01-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator-detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552

  19. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  20. Monte Carlo modeling of transport in PbSe nanocrystal films

    SciTech Connect

    Carbone, I. Carter, S. A.; Zimanyi, G. T.

    2013-11-21

    A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations confirm measurements that increase monotonically with particle size over two orders of magnitude.
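A stripped-down version of such a hopping model, with nearest-neighbor Miller-Abrahams hops on a cubic lattice, Gaussian site-energy disorder, and a bias along +x, can be sketched as follows. All parameter values are invented; the paper's model additionally includes size-dependent electronic structure, charging, and interparticle coupling:

```python
import math, random

random.seed(7)
kT = 0.025       # thermal energy [eV], room temperature
F_BIAS = 0.05    # energy drop per lattice site along +x [eV] (assumed field)
SIGMA_E = 0.05   # Gaussian site-energy disorder [eV] (assumed)
STEPS, WALKERS = 200, 400

def ma_rate(dE):
    """Miller-Abrahams hop rate: activated uphill, constant downhill."""
    return math.exp(-dE / kT) if dE > 0 else 1.0

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def walk():
    """Kinetic MC random walk; only the x displacement is tracked."""
    x = 0
    for _ in range(STEPS):
        rates = []
        for dx, dy, dz in DIRS:
            # site-energy difference: field tilt along x plus random disorder
            dE = -F_BIAS * dx + random.gauss(0.0, SIGMA_E)
            rates.append(ma_rate(dE))
        r = random.random() * sum(rates)   # hop chosen in proportion to rate
        for (dx, dy, dz), rate in zip(DIRS, rates):
            if r < rate:
                x += dx
                break
            r -= rate
    return x

drift = sum(walk() for _ in range(WALKERS)) / WALKERS  # mean x displacement
```

Raising SIGMA_E suppresses the drift, which is the qualitative disorder effect the abstract reports for small nanocrystals.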

  1. Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M

    2008-12-07

    Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than what could fit in memory of a single processor, thus it must be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.

  2. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    SciTech Connect

    Romano, Paul K; Brown, Forrest B; Forget, Benoit

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  3. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  4. Quantifying the number of color centers in single fluorescent nanodiamonds by photon correlation spectroscopy and Monte Carlo simulation

    SciTech Connect

    Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C.; Lim, T.-S.; Fann Wunshain

    2009-01-05

    The number of negatively charged nitrogen-vacancy centers (N-V)^- in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single-particle level. By taking into account the random dipole orientations of the multiple (N-V)^- fluorophores and simulating the probability distribution of their effective numbers (N_e), we found that the actual number (N_a) of the fluorophores is in linear correlation with N_e, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N_a = 8 ± 1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
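The statistical backbone of emitter counting by photon correlation is the relation g2(0) = 1 - 1/N for N independent single-photon emitters. A pulsed toy model (ideal detection, no background, and no dipole-orientation correction, so none of the 1.8/1.2 factors from the paper) illustrates the inversion:

```python
import random

random.seed(3)
N_TRUE, P_EMIT, PULSES = 8, 0.3, 500000   # assumed emitter number and brightness

singles = 0
pairs = 0
for _ in range(PULSES):
    # Each emitter independently yields a photon with probability P_EMIT.
    k = sum(1 for _ in range(N_TRUE) if random.random() < P_EMIT)
    singles += k
    pairs += k * (k - 1)                  # same-pulse coincidence pairs

mean_k = singles / PULSES
g2 = (pairs / PULSES) / mean_k ** 2       # estimator of <k(k-1)> / <k>^2
n_est = 1.0 / (1.0 - g2)                  # invert g2(0) = 1 - 1/N
```

For N_TRUE = 8 the expected g2 is 0.875, and the estimate recovers the emitter number without knowing P_EMIT, which is why the method is robust to brightness variations.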

  5. On Monte Carlo modeling of megavoltage photon beams: A revisited study on the sensitivity of beam parameters

    SciTech Connect

    Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie

    2011-01-15

    Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2×2 to 35×35 cm^2). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma ["On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam," Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E_AV, the mean energy; FWHM, the energy spectrum broadening; and R, the beam radius), the angular divergence (θ) of the primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2×2 to 35×35 cm^2. In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use θ = 0°, while the 4 and 15 MV beam models require θ = 0.5° and 0.6°, respectively. The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2×2 cm^2) and large (e.g., 35×35 cm^2) fields, while perfect agreement is maintained for the 10×10 cm^2 field. The influence of R on the central-axis depth dose and the strong influence of θ on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields proved to be more sensitive to variations in E_AV, R, and θ in comparison with the 10×10 cm^2 field. Monte Carlo beam models need to be validated for a wide range of field sizes, including small field sizes (e.g., 2×2 cm^2).

  6. SAF values for internal photon emitters calculated for the RPI-P pregnant-female models using Monte Carlo methods

    SciTech Connect

    Shi, C. Y.; Xu, X. George; Stabin, Michael G.

    2008-07-15

    Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.

  7. Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide

    NASA Astrophysics Data System (ADS)

    Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail

    2015-08-01

We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of coupling strength and the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system such as the photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter or modulator and may find important applications in quantum information.

  8. Single Photon Transport through an Atomic Chain Coupled to a One-dimensional Nanophotonic Waveguide

    E-print Network

    Zeyang Liao; Xiaodong Zeng; Shi-Yao Zhu; M. Suhail Zubairy

    2015-10-07

We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of coupling strength and the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach $100\\%$. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system such as the photonic bandgap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter or modulator and may find important applications in quantum information.

  9. One-dimensional hopping transport in disordered organic solids. II. Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kohary, K.; Cordes, H.; Baranovskii, S. D.; Thomas, P.; Yamasaki, S.; Hensel, F.; Wendorff, J.-H.

    2001-03-01

Drift mobility of charge carriers in strongly anisotropic disordered organic media is studied by Monte Carlo computer simulations. Results for the nearest-neighbor hopping are in excellent agreement with those of the analytic theory (Cordes et al., preceding paper). It is widely believed that the low-field drift mobility in disordered organic solids has the form mu ~ exp[-(T0/T)^2] with characteristic temperature T0 depending solely on the scale of the energy distribution of localized states responsible for transport. Taking into account electron transitions to more distant sites than the nearest neighbors, we show that this dependence is not universal and that the parameter T0 depends also on the concentration of localized states and on the decay length of the electron wave function in localized states. The results of the computer simulations show that correlations in the distribution of localized states essentially influence not only the field dependence, as known from the literature, but also the temperature dependence of the drift mobility. In particular, strong space-energy correlations diminish the role of long-range hopping transitions in the charge carrier transport.
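The hopping transport studied in this record can be illustrated with a small kinetic Monte Carlo sketch: nearest-neighbor hops on a 1D chain of Gaussian-distributed site energies with Miller-Abrahams rates. This is a toy model with hypothetical parameters, not the authors' code:

```python
import math
import random

def miller_abrahams_rate(dE, r, nu0=1.0, a=1.0, kT=1.0):
    """Hop rate between localized states at distance r with energy difference dE."""
    boltz = math.exp(-dE / kT) if dE > 0 else 1.0   # uphill hops thermally activated
    return nu0 * math.exp(-2.0 * r / a) * boltz

def mean_hop_time(kT, n_sites=200, sigma=1.0, spacing=1.0, n_hops=20000, seed=1):
    """KMC of nearest-neighbor hops along a periodic 1D chain; returns mean waiting time."""
    rng = random.Random(seed)
    energies = [rng.gauss(0.0, sigma) for _ in range(n_sites)]
    pos, t = 0, 0.0
    for _ in range(n_hops):
        left = (pos - 1) % n_sites
        right = (pos + 1) % n_sites
        r_left = miller_abrahams_rate(energies[left] - energies[pos], spacing, kT=kT)
        r_right = miller_abrahams_rate(energies[right] - energies[pos], spacing, kT=kT)
        total = r_left + r_right
        t += -math.log(rng.random()) / total        # exponential waiting time
        pos = left if rng.random() * total < r_left else right
    return t / n_hops
```

Lower temperatures trap the carrier in deep states, so the mean hop time grows steeply as kT decreases, which is the qualitative origin of the strong temperature dependence discussed in the abstract.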

  10. Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons

    SciTech Connect

Mei, S.; Knezevic, I.; Maurer, L. N.; Aksamija, Z.

    2014-10-28

We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.

  11. Pre-conditioned backward Monte Carlo solutions to radiative transport in planetary atmospheres. Fundamentals: Sampling of propagation directions in polarising media

    NASA Astrophysics Data System (ADS)

    García Muñoz, A.; Mills, F. P.

    2015-01-01

    Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. 
By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72

  12. On-Farm Transport of Ornamental Fish 1 Tina C. Crosby, Jeffrey E. Hill, Carlos V. Martinez, Craig A. Watson, Deborah B. Pouder, and Roy

    E-print Network

    Hill, Jeffrey E.

FA-119: On-Farm Transport of Ornamental Fish. Tina C. Crosby, Jeffrey E. Hill, Carlos V. Martinez, et al. [Figure 2: a transportation vehicle. Credits: Tina Crosby, 2004.] [...] and transport of fish will affect survival and overall quality of the fish (see UF IFAS Circular 919, Stress

  13. A direction-selective flattening filter for clinical photon beams. Monte Carlo evaluation of a new concept

    NASA Astrophysics Data System (ADS)

    Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

    2011-07-01

    A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm2, 4 × 14.5 cm2 or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. 
Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement and a slight, probably just noticeable improvement of the treatment outcome due to radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors 1/2.81 (DSF) and 1/3.49 (FFF).

  14. Output correction factors for nine small field detectors in 6 MV radiation therapy photon beams: A PENELOPE Monte Carlo study

    SciTech Connect

    Benmakhlouf, Hamza; Sempau, Josep; Andreo, Pedro

    2014-04-15

Purpose: To determine detector-specific output correction factors, k{sub Qclin,Qmsr}{sup fclin,fmsr}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179–5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm{sup 2} and 10 × 10 cm{sup 2} from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm{sup 2}. Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings by other authors; different corrections for the detectors from the two manufacturers were obtained.
The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.
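In the Alfonso et al. formalism cited in this record, the field output factor is the measured detector-reading ratio multiplied by the MC-calculated output correction factor. A one-line sketch with hypothetical numbers (not values from the paper):

```python
def field_output_factor(m_clin, m_msr, k_corr):
    """Omega = (M_clin / M_msr) * k  (Alfonso et al. small-field formalism).

    m_clin, m_msr: detector readings in the clinical and machine-specific
    reference fields; k_corr: MC-calculated output correction factor.
    """
    return (m_clin / m_msr) * k_corr

# Hypothetical readings for a small field vs the 10 x 10 cm2 reference field;
# an over-responding unshielded diode would get a correction factor below unity.
omega = field_output_factor(m_clin=0.62, m_msr=1.00, k_corr=0.95)
```

The point of the paper is precisely that `k_corr` must be tabulated per detector model, field size, and linac, rather than assumed to be unity.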

  15. A Comparison of Monte Carlo Particle Transport Algorithms for an Interior Source Binary Stochastic Medium Benchmark Suite

    SciTech Connect

    Brantley, P S

    2009-06-30

    Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important, because as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.
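The Levermore-Pomraning-style algorithms discussed in this record sample material chord lengths along the particle path from exponential distributions. A minimal sketch for a purely absorbing 1D binary Markovian slab (illustrative parameters and simplifications, not the benchmark suite or Algorithms A/B themselves):

```python
import random

def transmission(L, lam, sigma, n_hist=20000, seed=7):
    """Chord-length sampling through a 1D binary Markovian mixture of pure absorbers.

    lam[i]: mean chord length of material i; sigma[i]: total cross section.
    Each history alternates material segments drawn from exponential chord
    distributions and tests for a collision inside each segment.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x = 0.0
        # starting material chosen by volume fraction lam_i / (lam_0 + lam_1)
        mat = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
        alive = True
        while alive and x < L:
            seg = min(rng.expovariate(1.0 / lam[mat]), L - x)   # chord (or to boundary)
            d = rng.expovariate(sigma[mat]) if sigma[mat] > 0 else float("inf")
            if d < seg:
                alive = False       # collision (absorption) inside this chord
            else:
                x += seg            # reach the interface or the far boundary
                mat = 1 - mat       # cross into the other material
        if alive:
            transmitted += 1
    return transmitted / n_hist
```

Scattering, angular flux, and the improved local realization modeling of Algorithm B are omitted; the sketch only shows the chord-sampling idea the benchmark comparisons revolve around.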

  16. Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Kasen, Daniel; O’Connor, Evan; Fernández, Rodrigo; Ott, Christian D.

    2015-11-01

We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10^46 erg is deposited within 45° of the symmetry axis over 300 ms when a central black hole is present. Similarly, 1.9 × 10^48 erg is deposited over 3 s when a hypermassive neutron star sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.

  17. Guiding electromagnetic waves around sharp corners: topologically protected photonic transport in meta-waveguides (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Shvets, Gennady B.; Khanikaev, Alexander B.; Ma, Tzuhsuan; Lai, Kueifu

    2015-09-01

Science thrives on analogies, and a considerable number of inventions and discoveries have been made by pursuing an unexpected connection to a very different field of inquiry. For example, photonic crystals have been referred to as "semiconductors of light" because of the far-reaching analogies between electron propagation in a crystal lattice and light propagation in a periodically modulated photonic environment. However, two aspects of electron behavior, its spin and helicity, escaped emulation by photonic systems until the recent invention of photonic topological insulators (PTIs). The impetus for these developments in photonics came from the discovery of topologically nontrivial phases in condensed matter physics enabling edge states immune to scattering. The realization of topologically protected transport in photonics would circumvent a fundamental limitation imposed by the wave equation: the inability of light to propagate reflection-free along a sharply bent pathway. Topologically protected electromagnetic states could be used for transporting photons without any scattering, potentially underpinning revolutionary new concepts in applied science and engineering. I will demonstrate that a PTI can be constructed by applying three types of perturbations: (a) finite bianisotropy, (b) gyromagnetic inclusions breaking the time-reversal (T) symmetry, and (c) asymmetric rods breaking the parity (P) symmetry. We will experimentally demonstrate (i) the existence of the full topological bandgap in a bianisotropic medium and (ii) the reflectionless nature of wave propagation along the interface between two PTIs with opposite signs of the bianisotropy.

  18. Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind

    NASA Astrophysics Data System (ADS)

    Barakat, A. R.; Ji, J.; Schunk, R. W.

    2013-12-01

In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarified collisionless conditions. For decades, numerous attempts were made in order to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient but its validity is questionable outside the collision-dominated region depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.

  19. Monte Carlo model of neutral-particle transport in diverted plasmas

    SciTech Connect

    Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

    1981-11-01

    The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.

  20. Monte Carlo simulation of radiation transport in human skin with rigorous treatment of curved tissue boundaries

    NASA Astrophysics Data System (ADS)

Majaron, Boris; Milanič, Matija; Premru, Jan

    2015-01-01

    In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
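The "rigorous treatment" contrasted with voxelized boundaries in this record amounts to computing reflection and refraction against the analytically defined surface normal rather than a serrated voxel face. A minimal geometry sketch (generic helpers, not the authors' code):

```python
import math

def sphere_normal(p, center):
    """Outward unit normal of a spherical boundary at surface point p (analytic form)."""
    v = [p[i] - center[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def refract(d, n, n1, n2):
    """Snell refraction of unit direction d at unit normal n (pointing toward medium n1).

    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                           # total internal reflection
    return [eta * di + (eta * cos_i - math.sqrt(k)) * ni for di, ni in zip(d, n)]
```

On a voxel grid the normal snaps to one of six axis directions; evaluating `sphere_normal` at the true intersection point is what removes the erratic grid-placement artifacts the abstract describes.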

  1. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    SciTech Connect

    Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.

    2012-07-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
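The stochastic fallback mentioned for subdivided pieces can be sketched as plain rejection sampling over a CSG membership test inside a bounding box. This toy version, with assumed predicate helpers (`sphere`, `union`, etc.), omits the paper's recursive domain decomposition and error-bound machinery:

```python
import random

# Minimal CSG membership predicates (hypothetical helper names).
def sphere(cx, cy, cz, r):
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2 <= r * r

def union(a, b):      return lambda p: a(p) or b(p)
def intersect(a, b):  return lambda p: a(p) and b(p)
def subtract(a, b):   return lambda p: a(p) and not b(p)

def mc_volume(inside, box_lo, box_hi, n=200000, seed=3):
    """Stochastic volume: bounding-box volume times the fraction of samples inside."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in zip(box_lo, box_hi):
        box_vol *= hi - lo
    hits = sum(
        1 for _ in range(n)
        if inside(tuple(rng.uniform(lo, hi) for lo, hi in zip(box_lo, box_hi)))
    )
    return box_vol * hits / n
```

The statistical error of this estimator shrinks only as 1/sqrt(n), which is why the paper prefers analytic evaluation of decomposed pieces and falls back to sampling only where needed.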

  2. Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

    SciTech Connect

    Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W.

    2011-04-15

    Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0x4.0 to 30.0x30.0 cm{sup 2}) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within {+-}1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within {+-}2.0%({sigma}{sub MC}=0.8%) in lung ({rho}=0.24 g cm{sup -3}) and within {+-}2.9%({sigma}{sub MC}=0.8%) in low-density lung ({rho}=0.1 g cm{sup -3}). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity ({rho}=0.001 g cm{sup -3}) were found to be within the range of {+-}1.5% to {+-}4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark ({sigma}{sub MC}=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within {+-}2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. 
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

  3. Epithelial cancers and photon migration: Monte Carlo simulations and diffuse reflectance measurements

    NASA Astrophysics Data System (ADS)

    Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David

    2015-07-01

Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside tissues to the camera of the smartphone. Different simulation methods have been developed in the past, by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of various homogeneous tissue phantoms were measured with a spectrometer under several illumination and optical settings. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements of an added-absorber experiment on a series of phantoms showed that absorption of dye scales linearly when fit to both MCX and diffusion models. More work is needed to integrate a pupil into the experiment.

  4. MCNP: Photon benchmark problems

    SciTech Connect

    Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

    1991-09-01

    The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.

  5. Parallel domain decomposition methods in fluid models with Monte Carlo transport

    SciTech Connect

    Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.

    1996-12-01

To examine domain decomposition in a coupled Monte Carlo-finite element calculation, it is important to use a domain decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation ( ) on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions ( ) for a Monte Carlo calculation. Results are presented.

6. Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)

    SciTech Connect

    Kerisit, Sebastien N.; Rosso, Kevin M.

    2007-09-28

    The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. 
The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of repulsive defects has a linear effect on the electron polaron diffusion. Attractive defects are found to significantly affect electron polaron diffusion at low polaron to defect ratios due to trapping on nanosecond to microsecond time scales. This work indicates that electrons can diffuse away from the initial site of interfacial electron transfer at a rate that is consistent with measured electrical conductivities but that the presence of certain kinds of defects will severely limit the mobility of donated electrons.
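The small-polaron hopping transport described above is the natural territory of a residence-time (BKL) kinetic Monte Carlo scheme. The sketch below runs such a scheme for a single polaron on a 1-D chain and recovers the analytic diffusion coefficient of that toy model via the Einstein relation; the hop rate and spacing are illustrative placeholders, not the ab initio parameters used in the paper.

```python
# Minimal residence-time (BKL) kinetic Monte Carlo for polaron hopping
# on a 1-D chain. K_HOP and A are assumed example values.
import math
import random

random.seed(2)

K_HOP = 1.0e12          # hop rate per direction (1/s), assumed
A = 3.0e-10             # hop distance (m), roughly an Fe-Fe spacing, assumed
N_WALKERS, N_HOPS = 400, 200

msd_sum, t_sum = 0.0, 0.0
for _ in range(N_WALKERS):
    pos, t = 0, 0.0
    for _ in range(N_HOPS):
        total_rate = 2.0 * K_HOP                         # left + right events
        t += -math.log(1.0 - random.random()) / total_rate  # residence time
        pos += 1 if random.random() < 0.5 else -1           # pick an event
    msd_sum += (pos * A) ** 2
    t_sum += t

# 1-D Einstein relation: D = <x^2> / (2 t)
D_est = (msd_sum / N_WALKERS) / (2.0 * (t_sum / N_WALKERS))
D_exact = K_HOP * A * A      # analytic value for this toy model
```

The paper's model adds the hematite lattice geometry, polaron-polaron interactions, and defect interaction potentials on top of this basic hop-select/advance-clock loop.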

  7. On the Monte Carlo simulation of small-field micro-diamond detectors for megavoltage photon dosimetry.

    PubMed

    Andreo, Pedro; Palmans, Hugo; Marteinsdóttir, Maria; Benmakhlouf, Hamza; Carlsson-Tedgren, Åsa

    2016-01-01

Monte Carlo (MC) calculated detector-specific output correction factors for small photon beam dosimetry are commonly used in clinical practice. The technique, with a geometry description based on manufacturer blueprints, offers certain advantages over experimentally determined values but is not free of weaknesses. Independent MC calculations of output correction factors for a PTW-60019 micro-diamond detector were made using the EGSnrc and PENELOPE systems. Compared with published experimental data, the MC results showed substantial disagreement for the smallest field size simulated ([Formula: see text] mm). To explain the difference between the two datasets, a detector was imaged with x rays searching for possible anomalies in the detector construction or details not included in the blueprints. A discrepancy between the dimension stated in the blueprints for the active detector area and that estimated from the electrical contact seen in the x-ray image was observed. Calculations were repeated using the estimate of a smaller volume, leading to results in excellent agreement with the experimental data. MC users should become aware of the potential differences between the design blueprints of a detector and the manufactured product, as they may differ substantially. The constraint is applicable to the simulation of any detector type. Comparison with experimental data should be used to reveal geometrical inconsistencies and details not included in technical drawings, in addition to the well-known QA procedure of detector x-ray imaging. PMID:26630437

  8. Gel dosimetry measurements and Monte Carlo modeling for external radiotherapy photon beams: Comparison with a treatment planning system dose distribution

    NASA Astrophysics Data System (ADS)

    Valente, M.; Aon, E.; Brunetto, M.; Castellano, G.; Gallivanone, F.; Gambarini, G.

    2007-09-01

Gel dosimetry has proved to be useful to determine absorbed dose distributions in radiotherapy, as well as to validate treatment plans. Gel dosimetry allows dose imaging and is particularly helpful for non-uniform dose distribution measurements, as may occur when multiple-field irradiation techniques are employed. In this work, we report gel-dosimetry measurements and Monte Carlo (PENELOPE®) calculations for the dose distribution inside a tissue-equivalent phantom exposed to a typical multiple-field irradiation. Irradiations were performed with a 10 MV photon beam from a Varian® Clinac 18 accelerator. The employed dosimeters consisted of layers of Fricke Xylenol Orange radiochromic gel. The method for absorbed dose imaging was based on analysis of visible light transmittance, usually detected by means of a CCD camera. With the aim of finding a simple method for light transmittance image acquisition, a commercial flatbed-like scanner was employed. The experimental and simulated dose distributions have been compared with those calculated with a commercially available treatment planning system, showing reasonable agreement.

  9. Study of the response of plastic scintillation detectors in small-field 6 MV photon beams by Monte Carlo simulations

    SciTech Connect

    Wang, Lilie L. W.; Beddar, Sam

    2011-03-15

Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10 × 10 down to 0.5 × 0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5 × 0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10 × 10 to 0.5 × 0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by only 1%-2% irrespective of field size and orientation of the detector if the sensitive region is no more than 2 mm long and the optical fiber stems are prevented from pointing directly at the incident source.

  10. Influence of electrodes on the photon energy deposition in CVD-diamond dosimeters studied with the Monte Carlo code PENELOPE

    NASA Astrophysics Data System (ADS)

    Górka, B.; Nilsson, B.; Fernández-Varea, J. M.; Svensson, R.; Brahme, A.

    2006-08-01

    A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 µm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 µm thick CVD-diamond layer with 0.2 µm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) 60Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. 
From a dosimetric point of view, graphite would be an almost perfect electrode material. This study shows that, for the considered therapeutic beam qualities, the perturbation of the detector signal due to charge-collecting graphite electrodes of thicknesses between 0.1 and 700 µm is negligible within the calculation uncertainty of 0.2%.

  11. Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam

    PubMed Central

    Atarod, Maryam; Shokrani, Parvaneh

    2013-01-01

Because of the adverse effects of ionizing radiation on fetuses, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac including out-of-beam components was used for dose calculations at the out-of-field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were made using EBT2 radiochromic film in a RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field's edge. An increase in AT does not change the depth of maximum PD or its distribution as a function of distance from the field's edge. It is concluded that the maximum fetal dose can be estimated using a flat phantom, i.e., without taking the AT into account. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the in-plane profile should be measured at the out-of-field depth of maximum dose. PMID:24083135

  12. The development of a high speed solution for the evaluation of track structure Monte Carlo electron transport problems using field programmable gate arrays 

    E-print Network

    Pasciak, Alexander Samuel

    2009-05-15

    There are two principal techniques for performing Monte Carlo electron transport computations. The first, and least common, is the full track-structure method. This method individually models all physical electron interactions ...

  13. Monte Carlo simulation of gas Cerenkov detectors

    SciTech Connect

    Mack, J.M.; Jain, M.; Jordan, T.M.

    1984-01-01

Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.

  14. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for those open fields tested was improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73% to 99.70% and from 82.16% to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved.
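The commissioning step described in this record is, at its core, a regularized least-squares fit: the dose at each measurement point is a weighted sum of precomputed per-PSL doses, and the weights are adjusted to match measurement. The toy below shows that structure with 3 hypothetical PSLs, 4 measurement points, and plain gradient descent in place of the paper's augmented Lagrangian solver; every number is invented for illustration.

```python
# Toy PSL-weight commissioning: fit weights w so that psl_dose @ w matches
# the measured dose, with a smoothness penalty. All values are invented.

# psl_dose[i][j] = precomputed dose at measurement point i from PSL j
psl_dose = [
    [1.0, 0.2, 0.0],
    [0.5, 0.8, 0.1],
    [0.1, 0.9, 0.4],
    [0.0, 0.3, 1.0],
]
measured = [1.18, 1.33, 1.35, 1.37]   # hypothetical measured doses
lam = 0.01                            # smoothness regularization strength

w = [1.0, 1.0, 1.0]                   # PSL weights, start unperturbed
lr = 0.05
for _ in range(2000):
    # gradient of  sum_i (A_i . w - d_i)^2 + lam * sum_j (w_{j+1} - w_j)^2
    grad = [0.0, 0.0, 0.0]
    for row, d in zip(psl_dose, measured):
        r = sum(a * wj for a, wj in zip(row, w)) - d
        for j in range(3):
            grad[j] += 2.0 * r * row[j]
    for j in range(2):                # smoothness term
        diff = w[j + 1] - w[j]
        grad[j] -= 2.0 * lam * diff
        grad[j + 1] += 2.0 * lam * diff
    w = [wj - lr * g for wj, g in zip(w, grad)]

residual = sum(
    (sum(a * wj for a, wj in zip(row, w)) - d) ** 2
    for row, d in zip(psl_dose, measured)
)
```

The measured values here were generated from weights (1.0, 0.9, 1.1), so the fit should recover approximately those weights with a small bias from the smoothness penalty; the real problem has thousands of PSLs and adds a symmetry regularization.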

  15. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy.

    PubMed

    Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B

    2014-11-01

Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for those open fields tested was improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73% to 99.70% and from 82.16% to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved. PMID:25295381

  16. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    NASA Astrophysics Data System (ADS)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael H.; Sobolevsky, Nikolai; Thomsen, Bjarne; Bassler, Niels

    2015-03-01

The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was previously compared with an earlier version of SHIELD-HIT, but since then the inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment has been applied. Furthermore, the Fermi-Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and give better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, although some small deviations remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi-Teller Z-law, even though experimental data indicate that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.

  17. Update on the Status of the FLUKA Monte Carlo Transport Code

    NASA Technical Reports Server (NTRS)

    Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.

    2004-01-01

The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions from lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator for interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also make it possible to achieve better consistency between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.

  18. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
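The kernel operations this benchmark exercises (create, track, tally, destroy) can be shown with a serial toy: particles cross a 1-D slab with a given total cross section and per-collision scatter probability, and transmission is tallied. The cross section, slab width, and scatter probability are arbitrary example values, and the MPI particle trading mentioned in the record is omitted.

```python
# Serial toy with the MCB-style particle life cycle: create, track through
# a 1-D slab, tally leakage, destroy. All parameters are assumed.
import math
import random

random.seed(3)

SIGMA_T, SLAB = 1.0, 2.0    # total cross section (1/cm), slab width (cm), assumed
P_SCATTER = 0.3             # scatter probability per collision, assumed
N = 50000

transmitted = 0
for _ in range(N):
    x, mu = 0.0, 1.0        # particle creation: born at left face, moving right
    while True:
        # particle tracking: sample distance to next collision
        x += mu * (-math.log(1.0 - random.random()) / SIGMA_T)
        if x >= SLAB:
            transmitted += 1    # tally: leaked out the far side
            break
        if x < 0.0:
            break               # leaked back out the near side
        if random.random() < P_SCATTER:
            mu = 2.0 * random.random() - 1.0   # isotropic scatter
        else:
            break               # particle destruction: absorbed

T_est = transmitted / N
```

The uncollided component alone gives exp(-SIGMA_T * SLAB) ≈ 0.135, so the estimated transmission should land somewhat above that once scattered particles are included.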

  19. A generalized framework for in-line energy deposition during steady-state Monte Carlo radiation transport

    SciTech Connect

    Griesheimer, D. P.; Stedry, M. H.

    2013-07-01

    A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)

  20. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo Method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access.
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check if a neutron in the batch has been terminated or not. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked list type of layout increases memory usage, but lowers the number of data loads that are needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon like algorithm.
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP ha
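The remapping-vector idea in this record, sorting indices by reaction type rather than moving particle data, so that threads handling the same reaction touch contiguous entries and dead histories fall out of the cycle, can be sketched serially. The reaction codes and the dead-history sentinel below are invented stand-ins, and Python's `sorted` substitutes for the GPU radix sort.

```python
# Sketch of an event-based remapping vector: sort (reaction, index) each
# iteration instead of moving particle data. Codes are invented.
import random

random.seed(4)

ELASTIC, CAPTURE, FISSION, DEAD = 0, 1, 2, 999   # DEAD sorts to the end

particles = [{"E": random.random(), "alive": True} for _ in range(16)]
rxn = [random.choice([ELASTIC, CAPTURE, FISSION]) for _ in particles]

# Build and sort the remapping vector (a parallel radix sort on the GPU);
# particle data itself is never moved.
remap = sorted(range(len(particles)), key=lambda i: rxn[i])

# Process contiguous reaction blocks, as grouped GPU kernels would.
for i in remap:
    if rxn[i] in (CAPTURE, FISSION):
        particles[i]["alive"] = False
        rxn[i] = DEAD           # completed history: drops out next cycle

# Next iteration's transport touches only live histories, packed at the
# front of the re-sorted remapping vector.
live = [i for i in sorted(range(len(particles)), key=lambda i: rxn[i])
        if rxn[i] != DEAD]
```

On a GPU the payoff is reduced warp divergence and no wasted bandwidth checking terminated histories; here the same bookkeeping is just made explicit.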

  1. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose shows very strong dependence on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verification among the codes and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both simple spherical and detailed IC models. The measurements were performed in well-defined fields: (a) the four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in a hospital, and (e) a BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes in the photon energy region below 0.1 MeV and similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams.
But for the Mg(Ar) chamber, the deviations reached 7.8-16.5% for the X-ray beams below 120 kVp. In this study we were especially interested in BNCT doses, where the low-energy photon contribution is small enough to ignore; the MCNP model is therefore recognized as the most suitable for simulating the broad photon-electron and neutron energy-distributed responses of the paired ICs. MCNP also provides the best prediction for BNCT source adjustment from the detector's neutron and photon responses.
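
The abstract's "estimation of chamber current from absorbed dose rate of cavity gas" rests on the standard W-value relation for ionization chambers, I = Ḋ · m / (W/e). A minimal sketch of that conversion, with illustrative numbers not taken from the study:

```python
# Estimate ionization chamber current from the absorbed dose rate in the
# cavity gas via the mean-energy-per-ion-pair (W-value) relation:
#   I = (dose rate) * (gas mass) / (W / e)
# All numerical values below are illustrative, not from the study.

E_CHARGE = 1.602e-19  # C, elementary charge

def chamber_current(dose_rate_gy_per_s, gas_mass_kg, w_ev):
    """Current (A) collected when the cavity gas absorbs dose at the given rate.

    dose_rate_gy_per_s : absorbed dose rate in the gas (Gy/s = J/kg/s)
    gas_mass_kg        : mass of gas in the cavity
    w_ev               : mean energy (eV) expended per ion pair in the gas
    """
    energy_rate = dose_rate_gy_per_s * gas_mass_kg      # J/s deposited in gas
    ion_pairs_per_s = energy_rate / (w_ev * E_CHARGE)   # ion pairs created / s
    return ion_pairs_per_s * E_CHARGE                   # A

# Example: 1 mGy/s absorbed in 50 mg of argon (W for Ar is about 26.4 eV)
i = chamber_current(1e-3, 50e-6, 26.4)
print(f"{i:.3e} A")
```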

  2. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    SciTech Connect

    Müller, Florian; Jenny, Patrick; Meyer, Daniel W.

    2013-10-01

    Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
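
The MLMC idea the abstract relies on is a telescoping sum: E[Q_L] = E[Q_0] + Σ_l E[Q_l − Q_{l−1}], with each correction estimated from its own (smaller) sample count because the level differences have small variance. A minimal sketch with a toy "solver" standing in for the paper's streamline transport solver:

```python
import random

# Minimal multilevel Monte Carlo (MLMC) sketch. The fine-level expectation is
# written as a telescoping sum over level corrections, and coarse/fine solves
# within one correction term share the same random input so their difference
# has small variance. The "solver" is a toy placeholder, not the paper's
# streamline transport solver.

def solve(level, u):
    """Toy level-l approximation of a functional of the random input u;
    its error decays as the level is refined."""
    return u ** 2 + u / (2 ** level)

def mlmc(levels, samples_per_level, rng):
    total = 0.0
    for l in range(levels + 1):
        n = samples_per_level[l]
        acc = 0.0
        for _ in range(n):
            u = rng.random()                       # shared random input
            q_fine = solve(l, u)
            q_coarse = solve(l - 1, u) if l > 0 else 0.0
            acc += q_fine - q_coarse
        total += acc / n                           # one correction term
    return total

rng = random.Random(0)
# Fewer samples at finer (more expensive) levels:
est = mlmc(levels=4, samples_per_level=[4000, 2000, 1000, 500, 250], rng=rng)
print(est)  # approximates E[u^2 + u/16] = 1/3 + 1/32, about 0.365
```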

  3. Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas

    SciTech Connect

    Hoeoek, J.; Hellsten, T.

    2009-11-26

    Essential for modeling heating and transport of fusion plasmas is determining the distribution function of the plasma species. Characteristic of RF-heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to a Maxwellian due to the velocity dependence of the collision frequency. Because of the geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.

  4. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir Alexander

    Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure-induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impact of using more realistic computational phantoms with size variability to represent astronauts, together with simplified deterministic radiation transport, was evaluated. Next, the impact of microgravity-induced body changes on space radiation dosimetry using the same transport method was investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position to reduce exposures during solar particle events.
Finally, little overall difference in risk calculations between simplified deterministic radiation transport and 3D Monte Carlo radiation transport was found; however, for the galactic cosmic ray ion spectra, compensating errors were observed among the constituent ions, demonstrating the need to perform evaluations on a particle-differential basis with common cross-section libraries.

  5. SU-E-T-142: Effect of the Bone Heterogeneity On the Unflattened and Flattened Photon Beam Dosimetry: A Monte Carlo Comparison

    SciTech Connect

    Chow, J; Owrangi, A

    2014-06-01

    Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a 2 cm thick bone layer at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10×10 cm²). Phase-space files of the photon beams, based on the Varian TrueBeam linac, were generated by the Geant4 and BEAMnrc codes and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: The beam output of the unflattened photon beams was about 2.1 times larger than that of the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantoms became larger than at a beam angle of 0°. The dose ratio of the unflattened and flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region than the flattened beam. Conclusion: Although the unflattened photon beam had a different beam output and quality from the flattened one, the dose enhancements due to bone scatter were similar. However, the depth dose deviation due to the presence of bone was sensitive to beam obliquity.

  6. Photon Induced Transport in Graphene-Boron Nitride-Graphene Heterostructures

    NASA Astrophysics Data System (ADS)

    Nair, Nityan; Gabor, Nathaniel; Ma, Qiong; Watanabe, Kenji; Taniguchi, Takashi; Fang, Wenjing; Kong, Jing; Jarillo-Herrero, Pablo

    2013-03-01

    Monolayer graphene (MLG), an atomically thin sheet of hexagonally arranged carbon, is a zero-band-gap conductor that exhibits strong electron-electron interactions and broadband optical absorption. By combining MLG and hexagonal boron nitride into ultrathin vertical stacks, experiments have demonstrated improved mobility, Coulomb drag, and field-effect tunneling across few-layer boron nitride barriers. Here, we report on the photon-induced transport of charge carriers through a graphene-boron nitride-graphene heterostructure. The dependence of the generated photocurrent on photon energy and interlayer bias voltage is studied. The photocurrent is found to depend strongly on both of these parameters, showing several interesting features. We consider several processes that may serve to explain the rich dependence of photoconductance on applied bias voltage and photon energy.

  7. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation runs.

  8. Dosimetric advantage of using 6 MV over 15 MV photons in conformal therapy of lung cancer: Monte Carlo studies in patient geometries.

    PubMed

    Wang, Lu; Yorke, Ellen; Desobry, Gregory; Chui, Chen-Shou

    2002-01-01

    Many lung cancer patients who undergo radiation therapy are treated with higher energy photons (15-18 MV) to obtain deeper penetration and better dose uniformity. However, the longer range of the higher energy recoil electrons in the low-density medium may cause lateral electronic disequilibrium and degrade the target coverage. To compare the dose homogeneity achieved with lower versus higher energy photon beams, we performed a dosimetric study of 6 and 15 MV three-dimensional (3D) conformal treatment plans for lung cancer using an accurate, patient-specific dose-calculation method based on a Monte Carlo technique. A 6 and 15 MV 3D conformal treatment plan was generated for each of two patients with target volumes exceeding 200 cm(3) on an in-house treatment planning system in routine clinical use. Each plan employed four conformally shaped photon beams. Each dose distribution was recalculated with the Monte Carlo method, utilizing the same beam geometry and patient-specific computed tomography (CT) images. Treatment plans using the two energies were compared in terms of their isodose distributions and dose-volume histograms (DVHs). The 15 MV dose distributions and DVHs generated by the clinical treatment planning calculations were as good as, or slightly better than, those generated for 6 MV beams. However, the Monte Carlo dose calculation predicted increased penumbra width with increased photon energy resulting in decreased lateral dose homogeneity for the 15 MV plans. Monte Carlo calculations showed that all target coverage indicators were significantly worse for 15 MV than for 6 MV; particularly the portion of the planning target volume (PTV) receiving at least 95% of the prescription dose (V(95)) dropped dramatically for the 15 MV plan in comparison to the 6 MV. Spinal cord and lung doses were clinically equivalent for the two energies. 
In treatment planning of tumors that abut lung tissue, lower energy (6 MV) photon beams should be preferred over higher energies (15-18 MV) because of the significant loss of lateral dose equilibrium for high-energy beams in the low-density medium. Any gains in radial dose uniformity across steep density gradients for higher energy beams must be weighed carefully against the lateral beam degradation due to penumbra widening. PMID:11818004
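
The target coverage indicator the abstract leans on, V(95), is the fraction of the planning target volume receiving at least 95% of the prescription dose; it is read off the dose-volume histogram. A minimal sketch of both quantities from a flat list of per-voxel doses (the dose values are illustrative, not from the study):

```python
# Sketch: compute V95 -- the fraction of a structure receiving at least 95% of
# the prescription dose -- and a simple cumulative dose-volume histogram (DVH)
# from per-voxel doses. Dose values below are illustrative toy numbers.

def v_pct(doses, prescription, pct=95.0):
    """Fraction of voxels receiving at least pct% of the prescription dose."""
    threshold = prescription * pct / 100.0
    return sum(d >= threshold for d in doses) / len(doses)

def cumulative_dvh(doses, dose_levels):
    """Volume fraction receiving at least each dose level."""
    return [sum(d >= level for d in doses) / len(doses) for level in dose_levels]

ptv_doses = [58.0, 60.5, 61.2, 55.0, 59.8, 60.1]  # Gy, toy voxel doses
print(v_pct(ptv_doses, prescription=60.0))        # 5 of 6 voxels receive >= 57 Gy
print(cumulative_dvh(ptv_doses, [0.0, 57.0, 60.0, 62.0]))
```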

  9. ITS Version 4.0: Electron/photon Monte Carlo transport codes

    SciTech Connect

    Halbleib, J.A.; Kensek, R.P.; Seltzer, S.M.

    1995-07-01

    The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.

  10. The photon transport equation for turbid biological media with spatially varying isotropic refractive index.

    PubMed

    Premaratne, Malin; Premaratne, Erosha; Lowery, Arthur

    2005-01-24

    Using the principle of energy conservation and laws of geometrical optics, we derive the photon transport equation for turbid biological media with spatially varying isotropic refractive index. We show that when the refractive index is constant, our result reduces to the standard radiative transfer equation and when the medium is lossless and free of scattering to the well known geometrical optics equations in refractive media. PMID:19488365
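
For reference, the constant-index limit the authors mention is the standard steady-state radiative transfer equation, stated here in its common form from general knowledge rather than transcribed from the paper (L is radiance, μ_a and μ_s the absorption and scattering coefficients, p the scattering phase function, S a source term):

```latex
\hat{s} \cdot \nabla L(\mathbf{r},\hat{s})
  = -(\mu_a + \mu_s)\, L(\mathbf{r},\hat{s})
  + \mu_s \int_{4\pi} p(\hat{s}' \to \hat{s})\, L(\mathbf{r},\hat{s}')\, d\Omega'
  + S(\mathbf{r},\hat{s})
```

The paper's contribution is the generalization in which the refractive index varies with position, adding ray-bending terms that vanish when the index is constant.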

  11. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% relative in the control rod reactivity, and 1% in the sodium void reactivity.

  12. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling.

    PubMed

    Schaefer, C; Jansen, A P J

    2013-02-01

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature. PMID:23406093

  13. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling

    SciTech Connect

    Schaefer, C.; Jansen, A. P. J.

    2013-02-07

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
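
The mass-balance correction the abstract describes can be pictured as a projection: any set of species rates that is exactly consistent with the reaction network must lie in the column space of the stoichiometry matrix, so noisy kMC rates are replaced by their least-squares projection onto that subspace. A sketch under that reading, with an illustrative CO-oxidation network (not the paper's exact formulation):

```python
import numpy as np

# Sketch of a stoichiometry-based mass-balance correction: noisy net species
# production rates from a stochastic kMC run are projected onto the column
# space of the stoichiometry matrix S, so the corrected rates equal S @ x for
# some vector of reaction extents x and therefore conserve mass exactly.
# The network below (CO + * -> CO*, O2 + 2* -> 2O*, CO* + O* -> CO2 + 2*)
# is illustrative, not the paper's model.

# Rows: species (CO_gas, O2_gas, CO2_gas, CO*, O*, free sites *)
# Columns: the three elementary reactions above.
S = np.array([
    [-1.0,  0.0,  0.0],   # CO(g)
    [ 0.0, -1.0,  0.0],   # O2(g)
    [ 0.0,  0.0,  1.0],   # CO2(g)
    [ 1.0,  0.0, -1.0],   # CO*
    [ 0.0,  2.0, -1.0],   # O*
    [-1.0, -2.0,  2.0],   # free sites
])

def mass_balance_correct(S, rates):
    """Least-squares projection of noisy rates onto the stoichiometric subspace."""
    x, *_ = np.linalg.lstsq(S, rates, rcond=None)
    return S @ x

noisy = np.array([-1.02, -0.49, 0.97, 0.03, 0.01, -0.02])  # slightly inconsistent
corrected = mass_balance_correct(S, noisy)
# 'corrected' obeys every conservation law (C, O, surface sites) by construction.
```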

  14. Monte Carlo Simulations to Calibrate and Validate Tank Experiments of Macrodispersion of Density-Dependent Transport in Stochastically Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Starke, B.; Koch, M.

    2005-12-01

    To calibrate and validate tank experiments of macrodispersion in density-dependent flow within a stochastically heterogeneous medium, performed in a 10 m long, 1.2 m high and 0.1 m wide Plexiglas tank at the University of Kassel over the last few years, numerous Monte Carlo simulations using the SUTRA density-dependent flow and transport model have been performed. The objective of this ongoing long-term study is the analysis of the effects of the stochastic properties of the porous medium on the steady-state macrodispersion, particularly the transversal dispersion. The tank experiments have been set up to mimic density-dependent flow under hydrodynamically stable conditions (horizontally stratified flow, whereby saltwater is injected horizontally into freshwater in the lower half of the tank). Numerous experiments with saltwater concentrations ranging from c_0 = 250 (fresh water) to c_0 = 100000 ppm and three inflow velocities of u = 1, 4 and 8 m/day each are carried out for three stochastic, anisotropically packed sand structures with different mean K_g, variance σ², and horizontal and vertical correlation lengths λ_x, λ_z for the permeability variations. For each flow and transport experiment carried out in one tank pack, a large number of Monte Carlo simulations with stochastic realizations taken from the corresponding statistical family (with predefined K_g, σ², λ_x, λ_z) are simulated under steady-state conditions. From moment analyses and lateral widths of the simulated saltwater plume, variances σ_D² of lateral dispersion are calculated as a function of horizontal distance x from the tank inlet. Using simple square-root regression analysis of σ_D²(x), an expectation value for the transversal dispersivity E(A_T) is then computed which should be representative for the particular medium family and the given flow conditions.
One issue of particular interest concerns the number N of Monte Carlo simulations required to get an asymptotically stable value E(σ_D²) or E(A_T). Although this number depends essentially on the variance σ² of the heterogeneous medium, increasing with the latter, we find that N = O(100), i.e. an order of magnitude less than what has been found in previously published Monte Carlo simulations of tracer-type macrodispersion in stochastically heterogeneous media. As for the physics of the macrodispersion process retrieved from both the experiments and the Monte Carlo simulations, we find reasonable agreement that, as expected, deteriorates somewhat as the density contrast and the variance of the permeability distribution of the porous medium increase. Another aspect that will be discussed in detail is the different degree of sensitivity of the lateral macrodispersion to the various parameters describing the flow and the porous medium.
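
The dispersivity estimate described above can be sketched as a fit: under the common Fickian assumption the lateral plume variance grows as σ_D²(x) = 2·A_T·x, so a zero-intercept least-squares fit of σ_D² against x yields A_T as half the slope. This is a generic reading of the abstract's "square-root regression", with synthetic data points:

```python
# Sketch: estimate a transversal dispersivity A_T from simulated plume
# variances sigma_D^2(x), assuming the Fickian relation
#   sigma_D^2(x) = 2 * A_T * x
# and a zero-intercept least-squares fit. Data points are synthetic.

def fit_dispersivity(xs, var_d):
    """Zero-intercept least-squares slope of var_d versus x, divided by 2."""
    slope = sum(x * v for x, v in zip(xs, var_d)) / sum(x * x for x in xs)
    return slope / 2.0

xs = [1.0, 2.0, 4.0, 6.0, 8.0]                      # m from the tank inlet
var_d = [0.0021, 0.0039, 0.0082, 0.0121, 0.0158]    # m^2, synthetic variances
a_t = fit_dispersivity(xs, var_d)
print(f"A_T = {a_t:.4f} m")  # on the order of 1e-3 m for these toy values
```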

  15. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids.

    PubMed

    Tattersall, W J; Cocks, D G; Boyle, G J; Buckman, S J; White, R D

    2015-04-01

    We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems. PMID:25974609

  16. Monte Carlo study of coherent scattering effects of low-energy charged particle transport in Percus-Yevick liquids

    NASA Astrophysics Data System (ADS)

    Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.

    2015-04-01

    We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
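
The key modification both records describe is reweighting the coherent-scattering angular distribution by the static structure factor S(q), with momentum transfer q = 2k·sin(θ/2). A rejection-sampling sketch of that step, using a toy smooth S(q) rather than the Percus-Yevick form of the paper:

```python
import math
import random

# Sketch: sample a coherent-scattering angle in a structured medium by
# reweighting an isotropic single-scatterer distribution with the static
# structure factor S(q), q = 2 k sin(theta/2). The S(q) model below is a toy
# (suppressed at small q, -> 1 at large q), not the Percus-Yevick form.

def structure_factor(q):
    """Toy S(q): small-q (forward) scattering suppressed, S -> 1 at large q."""
    return q * q / (1.0 + q * q)

def sample_cos_theta(k, rng):
    """Rejection-sample cos(theta); the envelope uses S_max = 1."""
    while True:
        c = rng.uniform(-1.0, 1.0)                # isotropic proposal
        q = 2.0 * k * math.sqrt((1.0 - c) / 2.0)  # q = 2 k sin(theta/2)
        if rng.random() < structure_factor(q):    # accept with prob S(q)/S_max
            return c

rng = random.Random(1)
samples = [sample_cos_theta(k=1.0, rng=rng) for _ in range(20000)]
# Forward scattering (c -> 1 means q -> 0) is suppressed, so the sample mean
# of cos(theta) shifts negative relative to the isotropic value of zero.
print(sum(samples) / len(samples))
```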

  17. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, R. K.; Watts, J. W.; Yepes, P.

    2010-01-01

    The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. The transport codes compared are HZETRN, UPROP, FLUKA and GEANT4; the space radiation cases cover solar particle event (SPE) and galactic cosmic ray (GCR) spectra in slab geometry.

  18. Monte Carlo Electromagnetic Cross Section Production Method for Low Energy Charged Particle Transport Through Single Molecules 

    E-print Network

    Madsen, Jonathan R

    2013-08-13

    for predicting molecule-specific ionization, excitation, and scattering cross sections in the very low energy regime that can be applied in a condensed history Monte Carlo track-structure code. The present methodology begins with the calculation of a solution...

  19. Use of single scatter electron monte carlo transport for medical radiation sciences

    DOEpatents

    Svatos, Michelle M. (Oakland, CA)

    2001-01-01

    The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
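
A single-scatter code of the kind described simulates each interaction individually, which boils down to two sampling steps per flight: an exponential free-path draw from the total cross section, and a channel choice proportional to the partial cross sections from the library. A sketch with illustrative numbers (not CREEP's library values):

```python
import math
import random

# Sketch of the two sampling steps at the core of a single-scatter electron MC:
# (1) draw a free-flight distance from the exponential law with the total
#     macroscopic cross section, and
# (2) pick the interaction channel with probability proportional to its
#     partial cross section.
# Cross-section numbers are illustrative, not from the CREEP library.

def sample_free_path(sigma_total, rng):
    """Distance to the next interaction; exponential with mean 1/sigma_total."""
    return -math.log(1.0 - rng.random()) / sigma_total

def sample_channel(partials, rng):
    """Pick an interaction channel proportional to its partial cross section."""
    total = sum(partials.values())
    u = rng.random() * total
    for name, sigma in partials.items():
        u -= sigma
        if u <= 0.0:
            return name
    return name  # guard against floating-point rounding

partials = {"elastic": 6.0, "ionization": 3.0, "excitation": 1.0}  # 1/cm, toy
rng = random.Random(2)
path = sample_free_path(sum(partials.values()), rng)   # cm to next event
channel = sample_channel(partials, rng)                # which event occurs
```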

  20. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S.; Harrendorf, Marco A.; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-01

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX’s MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.

  1. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors

    NASA Astrophysics Data System (ADS)

    Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy

    2015-07-01

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
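
The field-effect modeling step the abstract mentions, a "Metropolis-like acceptance criterion", can be sketched generically: a proposed carrier move is accepted with probability min(1, exp(−ΔE/kT)), where ΔE folds in the work done by the applied field. All parameters below are illustrative, not the paper's NDDO Hamiltonian:

```python
import math
import random

# Sketch of a Metropolis-like acceptance step for a proposed charge-carrier
# move: accept with probability min(1, exp(-dE / kT)), where dE includes the
# site-energy change minus the work done by the bias field. Parameters are
# illustrative and not taken from the paper's model.

K_B = 8.617e-5  # eV/K, Boltzmann constant

def accept_move(site_energy_old, site_energy_new, e_field, dx, temp, rng):
    """Metropolis criterion; e_field * dx is the field's work on the move (eV)."""
    d_e = (site_energy_new - site_energy_old) - e_field * dx
    if d_e <= 0.0:
        return True                                   # downhill: always accept
    return rng.random() < math.exp(-d_e / (K_B * temp))

rng = random.Random(0)
# A move that is downhill in energy and along the field is always accepted:
assert accept_move(0.10, 0.05, e_field=0.01, dx=1.0, temp=300.0, rng=rng)
```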

  2. A multi-agent quantum Monte Carlo model for charge transport: Application to organic field-effect transistors.

    PubMed

    Bauer, Thilo; Jäger, Christof M; Jordan, Meredith J T; Clark, Timothy

    2015-07-28

    We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves. PMID:26233114

  3. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting

    NASA Astrophysics Data System (ADS)

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin

    2015-10-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge.
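
The figure of merit quoted at the end, applied bias photon-to-current efficiency (ABPE), has a standard textbook definition, ABPE = J·(1.23 V − V_bias)/P_in; the sketch below uses that general definition with illustrative numbers, not the paper's measured values:

```python
# Sketch: applied-bias photon-to-current efficiency (ABPE), defined as
#   ABPE = J * (1.23 V - V_bias) / P_in
# with J the photocurrent density (mA/cm^2), V_bias the applied bias (V),
# 1.23 V the thermodynamic water-splitting potential, and P_in the
# illumination power density (mW/cm^2). Numbers are illustrative.

def abpe(j_ma_cm2, v_bias, p_in_mw_cm2=100.0):
    """Fractional ABPE under the standard definition (1-sun default P_in)."""
    return j_ma_cm2 * (1.23 - v_bias) / p_in_mw_cm2

# E.g. 3.5 mA/cm^2 at 0.6 V bias under 100 mW/cm^2 illumination:
print(f"{abpe(3.5, 0.6) * 100:.2f} %")
```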

  4. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting

    PubMed Central

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A.; Choi, Kyoung-Shin

    2015-01-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron–hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984

  5. Simultaneous enhancements in photon absorption and charge transport of bismuth vanadate photoanodes for solar water splitting.

    PubMed

    Kim, Tae Woo; Ping, Yuan; Galli, Giulia A; Choi, Kyoung-Shin

    2015-01-01

    n-Type bismuth vanadate has been identified as one of the most promising photoanodes for use in a water-splitting photoelectrochemical cell. The major limitation of BiVO4 is its relatively wide bandgap (~2.5 eV), which fundamentally limits its solar-to-hydrogen conversion efficiency. Here we show that annealing nanoporous bismuth vanadate electrodes at 350 °C under nitrogen flow can result in nitrogen doping and generation of oxygen vacancies. This gentle nitrogen treatment not only effectively reduces the bandgap by ~0.2 eV but also increases the majority carrier density and mobility, enhancing electron-hole separation. The effect of nitrogen incorporation and oxygen vacancies on the electronic band structure and charge transport of bismuth vanadate are systematically elucidated by ab initio calculations. Owing to simultaneous enhancements in photon absorption and charge transport, the applied bias photon-to-current efficiency of nitrogen-treated BiVO4 for solar water splitting exceeds 2%, a record for a single oxide photon absorber, to the best of our knowledge. PMID:26498984

  6. Program EPICP: Electron photon interaction code, photon test module. Version 94.2

    SciTech Connect

    Cullen, D.E.

    1994-09-01

    The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection and lateral leakage from the detector, as well as events and energy deposition as a function of depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed for the same purpose: electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I merely note where the results obtained considering only photon transport are expected to differ significantly from those obtained using coupled electron-photon transport.
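The tallies EPICP reports (deposition, transmission, reflection) can be illustrated with a toy single-slab photon Monte Carlo. This is a generic 1-D sketch with made-up interaction parameters, not EPICP's actual physics or geometry:

```python
import math
import random

def slab_photon_mc(thickness=2.0, mu_t=1.0, absorb_prob=0.5, n=20000, seed=1):
    """Toy 1-D slab photon Monte Carlo: photons enter at x=0 travelling +x.

    mu_t is the total interaction coefficient (1/cm), absorb_prob the chance
    that a collision absorbs rather than scatters the photon.
    Returns the fractions (absorbed, transmitted, reflected)."""
    rng = random.Random(seed)
    absorbed = transmitted = reflected = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                      # position, direction cosine
        while True:
            # exponential free path; 1 - random() avoids log(0)
            x += mu * (-math.log(1.0 - rng.random()) / mu_t)
            if x < 0.0:
                reflected += 1
                break
            if x > thickness:
                transmitted += 1
                break
            if rng.random() < absorb_prob:    # collision: absorb or scatter
                absorbed += 1
                break
            mu = rng.uniform(-1.0, 1.0)       # crude isotropic re-direction
    return absorbed / n, transmitted / n, reflected / n

a, t, r = slab_photon_mc()
print(f"absorbed={a:.3f} transmitted={t:.3f} reflected={r:.3f}")
```

The transmitted fraction always exceeds the uncollided estimate exp(-mu_t * thickness) because scattered photons can still leak through.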

  7. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    SciTech Connect

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
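The weight-window biasing parameters these methods generate drive a standard splitting/Russian-roulette game at tracking time. A generic textbook sketch of that game (not the MAVRIC/ADVANTG implementation):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.Random(7)):
    """Generic weight-window check for one particle.

    Returns the list of surviving particle weights: split above the window,
    Russian-roulette below it, pass through unchanged inside it."""
    if weight > w_high:
        # split into n comparable copies, each inside the window
        n = min(int(weight / w_high) + 1, 10)
        return [weight / n] * n
    if weight < w_low:
        # roulette: survive with probability weight/w_survive, preserving
        # the expected weight
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                             # particle killed
    return [weight]

print(apply_weight_window(5.0, 0.5, 2.0))     # split into three copies of 5/3
print(apply_weight_window(1.0, 0.5, 2.0))     # inside the window: unchanged
```

Both branches conserve weight (exactly for splitting, in expectation for roulette), which is what keeps the biased game unbiased in the mean.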

  8. A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport

    SciTech Connect

    Gentile, N A; Trahan, T J

    2011-03-22

    We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.

  9. Monte-Carlo-derived insights into dose-kerma-collision kerma inter-relationships for 50 keV-25 MeV photon beams in water, aluminum and copper

    NASA Astrophysics Data System (ADS)

    Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.

    2015-01-01

    The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated material. D/K in 'sub-equilibrium' small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance 'upstream' from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē_0), where Ē_0 is the mean initial secondary electron energy. These X̄_emp agree well with 'exact' MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values.
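The proposed estimate X̄_emp ≈ 0.5 R_csda(Ē_0) is simple to evaluate once a CSDA range is available. A sketch using a small, approximate CSDA-range table for water (the numbers below are rough illustrative values, not the paper's data; consult a standard electron-range tabulation such as NIST ESTAR for real work):

```python
# Illustrative CSDA ranges in water, R_csda(E) in g/cm^2, approximated from
# standard electron-range tabulations (rough values, for demonstration only).
CSDA_WATER = {0.5: 0.18, 1.0: 0.44, 2.0: 0.98, 5.0: 2.55, 10.0: 4.98}

def x_emp(mean_e0_mev: float) -> float:
    """X_emp ~ 0.5 * R_csda(E0_bar): mean 'upstream' distance (g/cm^2) to the
    origin of the secondary electrons depositing energy in a voxel."""
    energies = sorted(CSDA_WATER)
    if mean_e0_mev <= energies[0]:
        return 0.5 * CSDA_WATER[energies[0]]
    # linear interpolation on the small table above
    for lo, hi in zip(energies, energies[1:]):
        if mean_e0_mev <= hi:
            f = (mean_e0_mev - lo) / (hi - lo)
            return 0.5 * (CSDA_WATER[lo] + f * (CSDA_WATER[hi] - CSDA_WATER[lo]))
    return 0.5 * CSDA_WATER[energies[-1]]

print(round(x_emp(2.0), 3))  # -> 0.49
```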

  10. Monte Carlo study of the energy response and depth dose water equivalence of the MOSkin radiation dosimeter at clinical kilovoltage photon energies.

    PubMed

    Lian, C P L; Othman, M A R; Cutajar, D; Butson, M; Guatelli, S; Rosenfeld, A B

    2011-06-01

    Skin dose is often the quantity of interest for radiological protection, as the skin is the organ that receives maximum dose during kilovoltage X-ray irradiations. The purpose of this study was to simulate the energy response and the depth dose water equivalence of the MOSkin radiation detector (Centre for Medical Radiation Physics (CMRP), University of Wollongong, Australia), a MOSFET-based radiation sensor with a novel packaging design, at clinical kilovoltage photon energies typically used for superficial/orthovoltage therapy and X-ray CT imaging. Monte Carlo simulations by means of the Geant4 toolkit were employed to investigate the energy response of the CMRP MOSkin dosimeter on the surface of the phantom, and at various depths ranging from 0 to 6 cm in a 30 × 30 × 20 cm water phantom. By varying the thickness of the tissue-equivalent packaging, and by adding thin metallic foils to the existing design, the dose enhancement effect of the MOSkin dosimeter at low photon energies was successfully quantified. For a 5 mm diameter photon source, it was found that the MOSkin was water equivalent to within 3% at shallow depths less than 15 mm. It is recommended that for depths larger than 15 mm, the appropriate depth dose water equivalent correction factors be applied to the MOSkin at the relevant depths if this detector is to be used for depth dose assessments. This study has shown that the Geant4 Monte Carlo toolkit is useful for characterising the surface energy response and depth dose behaviour of the MOSkin. PMID:21559885

  11. Optical photon transport in powdered-phosphor scintillators. Part II. Calculation of single-scattering transport parameters

    SciTech Connect

    Poludniowski, Gavin G.; Evans, Philip M.

    2013-04-15

    Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
    Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
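In optical-photon Monte Carlo of this kind, the anisotropy g computed from Mie (or geometrical optics) theory is typically fed into a phase function at scattering events. A common surrogate parameterized by the same g is the Henyey-Greenstein function; sampling it is a one-liner (a generic sketch, not the phsphr package's method):

```python
import math
import random

def sample_hg_costheta(g: float, rng: random.Random) -> float:
    """Sample cos(theta) from the Henyey-Greenstein phase function with
    anisotropy g = <cos(theta)>, a common surrogate for the full Mie
    phase function in optical photon transport."""
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0)          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(42)
g = 0.75
mean = sum(sample_hg_costheta(g, rng) for _ in range(200000)) / 200000
print(round(mean, 2))  # sample mean of cos(theta) should approach g
```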

  12. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of dose, such as those from secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
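The buildup-factor idea being validated here has a simple point-kernel form: dose behind a shield equals the uncollided dose times a buildup factor B that accounts for secondary-particle contributions. A generic sketch using a Taylor two-exponential form with hypothetical placeholder coefficients (not this paper's proton parameterization):

```python
import math

def buildup_taylor(mu_t, A=8.0, a1=-0.10, a2=0.03):
    """Taylor-form buildup factor B(mu*t) = A*exp(-a1*mu*t) + (1-A)*exp(-a2*mu*t).

    B(0) = 1 by construction. The coefficients here are hypothetical
    placeholders, not fitted values for any real material."""
    return A * math.exp(-a1 * mu_t) + (1.0 - A) * math.exp(-a2 * mu_t)

def shielded_dose(d0, mu, t):
    """Point-kernel dose behind a slab: uncollided attenuation times buildup."""
    mu_t = mu * t
    return d0 * math.exp(-mu_t) * buildup_taylor(mu_t)

# Two mean free paths of shielding: buildup raises the dose well above
# the uncollided estimate d0 * exp(-2)
print(round(shielded_dose(1.0, 0.5, 4.0), 3))
```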

  13. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition

    SciTech Connect

    Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
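The domain-decomposition logic, track until the particle crosses an interprocessor boundary, then hand it to the neighbouring domain, can be mimicked serially. A toy 1-D decomposition sketch (not Mercury's implementation):

```python
def owner(x, bounds):
    """Index of the 1-D domain [bounds[i], bounds[i+1]) that owns position x."""
    for i in range(len(bounds) - 1):
        if bounds[i] <= x < bounds[i + 1]:
            return i
    raise ValueError("particle escaped the problem domain")

def track(x, direction, step, bounds):
    """Advance a particle in fixed steps; record each hand-off that would be
    an interprocessor communication in a parallel tracker."""
    handoffs = []
    dom = owner(x, bounds)
    while bounds[0] <= x + direction * step < bounds[-1]:
        x += direction * step
        new_dom = owner(x, bounds)
        if new_dom != dom:                   # crossed a domain boundary:
            handoffs.append((dom, new_dom))  # communicate to the neighbour
            dom = new_dom
    return x, handoffs

# Three domains on [0, 3); a particle starting at 0.4 stepping by +0.5
x, hops = track(0.4, +1, 0.5, [0.0, 1.0, 2.0, 3.0])
print(round(x, 1), hops)  # two hand-offs: domain 0 -> 1 -> 2
```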

  14. Study of the response of a lithium yttrium borate scintillator based neutron rem counter by Monte Carlo radiation transport simulations

    NASA Astrophysics Data System (ADS)

    Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.

    2015-12-01

    The scarcity and the high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm thick lead layer introduced to enhance gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use in a rem counter.

  15. Observing gas and dust in simulations of star formation with Monte Carlo radiation transport on Voronoi meshes

    E-print Network

    Hubber, D A; Dale, J

    2015-01-01

    Ionising feedback from massive stars dramatically affects the interstellar medium local to star forming regions. Numerical simulations are now starting to include enough complexity to produce morphologies and gas properties that are not too dissimilar from observations. The comparison between the density fields produced by hydrodynamical simulations and observations at given wavelengths relies however on photoionisation/chemistry and radiative transfer calculations. We present here an implementation of Monte Carlo radiation transport through a Voronoi tessellation in the photoionisation and dust radiative transfer code MOCASSIN. We show for the first time a synthetic spectrum and synthetic emission line maps of a hydrodynamical simulation of a molecular cloud affected by massive stellar feedback. We show that the approach on which previous work is based, which remapped hydrodynamical density fields onto Cartesian grids before performing radiative transfer/photoionisation calculations, results in significant ...
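The core geometric query in Voronoi-mesh Monte Carlo transport, which cell contains a photon, reduces to a nearest-seed search, since a Voronoi cell is by definition the set of points closest to its generating seed. A brute-force sketch (real codes use acceleration structures; this is not MOCASSIN's implementation):

```python
import math

def voronoi_cell(point, seeds):
    """Index of the Voronoi cell containing `point`: the nearest seed wins."""
    return min(range(len(seeds)),
               key=lambda i: math.dist(point, seeds[i]))

# Three seed points generating three Voronoi cells in 3-D
seeds = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(voronoi_cell((0.6, 0.1, 0.0), seeds))  # -> 1 (nearest to (1, 0, 0))
```

Walking a ray through the tessellation then amounts to repeating this query (or, more efficiently, stepping across shared cell faces) as the photon moves.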

  16. Radial quasiballistic transport in time-domain thermoreflectance studied using Monte Carlo simulations

    SciTech Connect

    Ding, D.; Chen, X.; Minnich, A. J.

    2014-04-07

    Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.

  17. A Monte Carlo study on electron and neutron contamination caused by the presence of hip prosthesis in photon mode of a 15 MV Siemens PRIMUS linac.

    PubMed

    Bahreyni Toossi, Mohammad Taghi; Behmadi, Marziyeh; Ghorbani, Mahdi; Gholamhosseinian, Hamid

    2013-01-01

    Several investigators have pointed out that electron and neutron contamination from high-energy photon beams is clinically important. The aim of this study is to assess the electron and neutron contamination produced by various prostheses in a high-energy photon beam of a medical linac. A 15 MV Siemens PRIMUS linac was simulated with the MCNPX Monte Carlo (MC) code, and the resulting percentage depth dose (PDD) and dose profile values were compared with measured data. Electron and neutron contamination were calculated on the beam's central axis for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti hip prostheses through MC simulations. The dose increase factor (DIF) was calculated as the ratio of the electron (neutron) dose at a point for a 10 × 10 cm² field size in the presence of a prosthesis to that at the same point in its absence. DIF was estimated at different depths in a water phantom. Our MC-calculated PDD and dose profile data are in good agreement with the corresponding measured values. The maximum dose increase factors for electron contamination for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti prostheses were 1.18, 1.16, 1.16, and 1.14, respectively. The corresponding values for neutron contamination were 184.55, 137.33, 40.66, and 43.17, respectively. Titanium-based prostheses are recommended for the orthopedic practice of hip junction replacement. When treatment planning for a patient with a hip prosthesis is performed for a high-energy photon beam, an attempt should be made to ensure that the prosthesis is not exposed to primary photons. PMID:24036859

  18. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Khledi, Navid; Arbabi, Azim; Sardari, Dariush; Mohammadi, Mohammad; Ameri, Ahmad

    2015-02-01

    Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. Photon beams, on the other hand, are used for treating deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons pass through the holes; as a result, we obtain a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10 × 10 and 4 × 4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: reduced dependence of the pure electron penumbra on depth, especially for small fields, and less dramatic variation of the PDD curve with irradiation field size.

  19. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study

    SciTech Connect

    Khledi, Navid; Sardari, Dariush; Arbabi, Azim; Ameri, Ahmad; Mohammadi, Mohammad

    2015-02-24

    Depending on the location and depth of the tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. Photon beams, on the other hand, are used for treating deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a fraction of the electrons pass through the holes; as a result, we obtain a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10 × 10 and 4 × 4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: reduced dependence of the pure electron penumbra on depth, especially for small fields, and less dramatic variation of the PDD curve with irradiation field size.

  20. SU-E-CAMPUS-I-02: Estimation of the Dosimetric Error Caused by the Voxelization of Hybrid Computational Phantoms Using Triangle Mesh-Based Monte Carlo Transport

    SciTech Connect

    Lee, C; Badal, A

    2014-06-15

    Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may result in dosimetric error compared to real anatomy composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels was 6.4×10^4, 3.3×10^4, and 1.3×10^4 for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10^4 histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.

  1. Branching and path-deviation of positive streamers resulting from statistical photon transport

    NASA Astrophysics Data System (ADS)

    Xiong, Zhongmin; Kushner, Mark J.

    2014-12-01

    The branching and change in direction of propagation (path-deviation) of positive streamers in molecular gases such as air likely require a statistical process which perturbs the head of the streamer and produces an asymmetry in its space charge density. In this paper, the mechanisms for path-deviation and branching of atmospheric pressure positive streamer discharges in dry air are numerically investigated from the viewpoint of statistical photon transport and photoionization. A statistical photon transport model, based on randomly selected emitting angles and mean-free-path for absorption, was developed and embedded into a fluid-based plasma transport model. The hybrid model was applied to simulations of positive streamer coaxial discharges in dry air at atmospheric pressure. The results show that secondary streamers, often spatially isolated, are triggered by the random photoionization and interact with the thin space charge layer (SCL) of the primary streamer. This interaction may be partly responsible for path-deviation and streamer branching. The general process consists of random remote photo-electron production which initiates a back-traveling electron avalanche, collision of this secondary avalanche with the primary streamer and the subsequent perturbation to its SCL. When the SCL is deformed from a symmetric to an asymmetric shape, the streamer can experience an abrupt change in the direction of propagation. If the SCL is sufficiently perturbed and essentially broken, local maxima in the SCL can develop into new streamers, leading to streamer branching. During the propagation of positive streamers, this mechanism can take place repetitively in time and space, thus producing multi-level branching and more than two branches within one level.
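The statistical photon model described, a randomly selected emission angle plus an exponentially distributed flight distance set by the mean free path for absorption, is straightforward to sample. A generic sketch of that sampling step (not the authors' code):

```python
import math
import random

def sample_photon(origin, mean_free_path, rng):
    """Emit a photon isotropically from `origin` and return its absorption
    point: uniform direction on the unit sphere, exponential flight distance."""
    # isotropic direction: uniform cos(theta) in [-1, 1], uniform azimuth
    ct = rng.uniform(-1.0, 1.0)
    st = math.sqrt(1.0 - ct * ct)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    d = (st * math.cos(phi), st * math.sin(phi), ct)
    # exponential free path; 1 - random() avoids log(0)
    r = -mean_free_path * math.log(1.0 - rng.random())
    return tuple(o + r * di for o, di in zip(origin, d))

rng = random.Random(3)
pts = [sample_photon((0.0, 0.0, 0.0), 1.0, rng) for _ in range(100000)]
mean_r = sum(math.dist(p, (0.0, 0.0, 0.0)) for p in pts) / len(pts)
print(round(mean_r, 2))  # sample mean flight distance should approach 1.0
```

Each sampled absorption point is where a remote photo-electron would be created, seeding the back-travelling avalanches the abstract describes.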

  2. High-resolution monte carlo simulation of flow and conservative transport in heterogeneous porous media 1. Methodology and flow results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.

  3. Photonics

    NASA Astrophysics Data System (ADS)

    Hiruma, Teruo

    1993-04-01

    After developing various kinds of photodetectors such as phototubes, photomultiplier tubes, image pick up tubes, solid state photodetectors and a variety of light sources, we also started to develop integrated systems utilizing new detectors or imaging devices. These led us to the technology for a single photon counting imaging and detection of picosecond and femtosecond phenomena. Through those experiences, we gained the understanding that photon is a paste of substances, and yet we know so little about photon. By developing various technology for many fields such as analytical chemistry, high energy physics, medicine, biology, brain science, astronomy, etc., we are beginning to understand that the mind and life are based on the same matter, that is substance. Since humankind has so little knowledge about the substance concerning the mind and life, this makes some confusion on these subjects at this moment. If we explore photonics more deeply, many problems we now have in the world could be solved. By creating new knowledge and technology, I believe we will be able to solve the problems of illness, aging, energy, environment, human capability, and finally, the essential healthiness of the six billion human beings in the world.

  4. Result of Monte-Carlo simulation of electron-photon cascades in lead and layers of lead-scintillator

    NASA Technical Reports Server (NTRS)

    Wasilewski, A.; Krys, E.

    1985-01-01

    Results of Monte-Carlo simulation of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data have shown that introducing an inhomogeneous chamber structure results in a subsequent reduction of secondary particles.

  5. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam

    2015-09-01

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method of predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distribution calculations were made for three voxel sizes: 1 × 1 × 0.1 cm3, 1 × 1 × 0.5 cm3, and 1 × 1 × 0.8 cm3. A total of 1 × 10^9 histories was simulated in order to reach statistical uncertainties of 2%. This simulation takes about 9-10 hours to complete. Measurements were made with a 10 × 10 cm2 field size for the 6 MV photon beams with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions in a water phantom were both MC-simulated and measured. The output of this simulation, i.e. the percent depth dose and the dose profile at dmax from the three sets of calculations, is presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) at 0-5 cm depth. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of the voxel. The results of this study show that the difference between Monte Carlo simulation and experimental data depends on the voxel size, both for the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, was obtained for the 1 × 1 × 0.8 cm3 voxel size. In this study, the dose profile analysis focused on the high-gradient dose area. For the profile scan along the Y axis, the largest difference, about 12%, was obtained for the 1 × 1 × 0.1 cm3 voxel size. This study demonstrated that the choice of voxel size in Monte Carlo simulation is important.
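The point that a scored voxel dose is a volume-averaged estimate explains the voxel-size sensitivity: a larger voxel smooths sharp features of the depth-dose curve. Rebinning a fine curve into coarse voxels illustrates this (a synthetic curve, not the paper's data):

```python
def rebin(dose_fine, factor):
    """Average consecutive groups of `factor` fine voxels into one coarse
    voxel, mimicking the volume averaging a larger scoring voxel performs."""
    n = len(dose_fine) // factor
    return [sum(dose_fine[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

# Synthetic, sharply peaked depth-dose samples (arbitrary units)
fine = [0.2, 0.5, 0.9, 1.0, 0.95, 0.9, 0.85, 0.8]
coarse = rebin(fine, 4)
print([round(c, 3) for c in coarse])  # -> [0.65, 0.875]: the peak of 1.0 is lost
```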

  6. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  7. A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES

    SciTech Connect

    Schnittman, Jeremy D.; Krolik, Julian H. E-mail: jhk@pha.jhu.edu

    2013-11-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  8. Comparison of Two Accelerators for Monte Carlo Radiation Transport Calculations, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p Coprocessor: A Case Study for X-ray CT Imaging Dose Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

    2014-06-01

    Hardware accelerators are becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT(CPU), ARCHER-CT(GPU) and ARCHER-CT(COP), to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). All the code variants were significantly faster than parallel MCNPX running on 12 MPI processes, and the GPU and coprocessor performed comparably, being 2.89~4.49 and 3.01~3.23 times faster, respectively, than the parallel ARCHER-CT(CPU) running with 12 hyperthreads.

  9. Theory of single-photon transport in a single-mode waveguide. I. Coupling to a cavity containing a two-level atom

    E-print Network

    Fan, Shanhui

    The single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed. (Jung-Tsung Shen and Shanhui Fan, Ginzton Laboratory, Stanford University.)

  10. Theory of single-photon transport in a single-mode waveguide. II. Coupling to a whispering-gallery resonator containing a two-level atom

    E-print Network

    Fan, Shanhui

    The single-photon transport properties, such as the transmission, are analyzed for a single-mode waveguide coupled to a whispering-gallery resonator interacting with a two-level atom. (Jung-Tsung Shen and Shanhui Fan, Ginzton Laboratory, Stanford University.)

  11. Sampling function of single-X-ray-photon counting hybrid pixel detectors: combining an analytical approach to Monte-Carlo simulations and Finite-Element-Modeling

    NASA Astrophysics Data System (ADS)

    McGrath, J.; Marchal, J.; Medjoubi, K.

    2013-10-01

    Spectroscopic and imaging performance parameters of hybrid pixel detectors operated in single-X-ray-photon-counting mode can be inferred from the dependence of their sampling function (or aperture function) on the detection energy threshold. In a previous paper, it was shown that this dependence could be modelled using a simple analytical method. Measurements were performed on typical synchrotron X-ray detectors and fitted to the analytical formulas in order to obtain detector parameters such as charge-sharing width, energy dispersion and fill-factor at 50% threshold. In the present paper, we use Monte-Carlo (MC) and Finite-Element-Modeling (FEM) software tools to perform a more detailed simulation of the image formation processes taking place in photon-counting hybrid pixel detectors of various pixel sizes combined with a standard silicon sensor thickness and exposed to 15 keV monochromatic X-rays. We show that the MC/FEM simulation results can be used to produce the detector parameters required in the analytical expressions of the sampling function of these detectors.

  12. SU-E-J-09: A Monte Carlo Analysis of the Relationship Between Cherenkov Light Emission and Dose for Electrons, Protons, and X-Ray Photons

    SciTech Connect

    Glaser, A; Zhang, R; Gladstone, D; Pogue, B

    2014-06-01

    Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time by using a theoretical and Monte Carlo-based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams were explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of the beam hardening present in a single beam. The technique was found to be unsuitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was also found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near-monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to the lack of light emission. NIH R01CA109558 and R21EB017559.
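    The beam-hardening argument above can be reproduced with a two-component toy beam (the weights, attenuation coefficients and light yields below are invented for illustration, not taken from the study): for a monoenergetic beam the Cherenkov-light-to-dose ratio is depth-independent, while a polyenergetic mixture drifts with depth as the harder component comes to dominate.

```python
import math

# Two-component toy beam: (weight, attenuation coefficient per cm,
# Cherenkov photons emitted per unit dose for that component).
COMPONENTS = [(0.5, 0.30, 1.0),   # "soft" part: attenuates quickly
              (0.5, 0.15, 1.4)]   # "hard" part: penetrates, yields more light

def light_per_dose(depth_cm):
    """Cherenkov light emitted per unit dose deposited at a given depth."""
    dose = sum(w * math.exp(-mu * depth_cm) for w, mu, _ in COMPONENTS)
    light = sum(w * math.exp(-mu * depth_cm) * y for w, mu, y in COMPONENTS)
    return light / dose

for z in (0.0, 10.0, 20.0):
    print(f"depth {z:4.1f} cm: light per unit dose = {light_per_dose(z):.3f}")
```

    With a single component the ratio is exactly constant with depth; the drift seen here is the toy analogue of the polyenergetic error the record describes.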

  13. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering

    SciTech Connect

    Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang

    2011-07-15

    We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.

  14. Coupling of a single diamond nanocrystal to a whispering-gallery microcavity: Photon transport benefitting from Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang

    2011-07-01

    We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal’s azimuthal position.

  15. Elucidating the electron transport in semiconductors via Monte Carlo simulations: an inquiry-driven learning path for engineering undergraduates

    NASA Astrophysics Data System (ADS)

    Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio

    2015-09-01

    Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
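    The free-flight-and-scatter cycle at the heart of such simulations can be shown in a deliberately simplified single-valley form (constant scattering rate and full velocity randomization are simplifying assumptions made here for brevity, not the multivalley InP model of the paper). For this model the drift velocity has the closed form (q/m)·F·τ, which the simulation should reproduce:

```python
import math
import random

def drift_velocity(field=1.0, tau=1.0, q_over_m=1.0, n_flights=200000, seed=2):
    """Single-valley toy of the ensemble Monte Carlo idea: an electron free-flies
    under a constant field, then scatters after an exponentially distributed
    waiting time that fully randomizes its velocity (mean zero, so each flight
    restarts from v = 0).  Analytic drift velocity: (q/m) * field * tau."""
    rng = random.Random(seed)
    a = q_over_m * field
    t_total = x_total = 0.0
    for _ in range(n_flights):
        dt = -math.log(1.0 - rng.random()) * tau   # exponential free-flight time
        x_total += 0.5 * a * dt * dt               # displacement, starting from v = 0
        t_total += dt
    return x_total / t_total

print(f"simulated drift velocity: {drift_velocity():.3f}  (analytic: 1.000)")
```

    The same estimator (total displacement over total time) carries over to the multivalley case; only the flight-time sampling and the post-scattering state become more elaborate.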

  16. The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)

    SciTech Connect

    Rhoades, W.A.; Simpson, D.B.

    1997-10-01

    TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
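    The discrete ordinates method that TORT applies in three dimensions can be sketched in one dimension. The snippet below is a minimal slab-geometry S_N solver with diamond-difference sweeps and source iteration (a textbook reduction, not TORT's nodal or characteristic schemes); its centre flux should approach the infinite-medium value q/(sigma_t - sigma_s):

```python
import numpy as np

def sn_slab(sigma_t=1.0, sigma_s=0.5, q=1.0, width=10.0, nx=100, n_ang=8, iters=500):
    """1-D slab discrete ordinates (S_N): Gauss-Legendre ordinates, isotropic
    scattering, a flat fixed source q, vacuum boundaries, diamond-difference
    sweeps inside a source iteration.  Returns the scalar flux per cell."""
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # ordinates and weights on [-1, 1]
    dx = width / nx
    phi = np.zeros(nx)
    for _ in range(iters):
        src = 0.5 * (sigma_s * phi + q)              # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            a = abs(mu[m]) / dx
            psi_in = 0.0                             # vacuum boundary condition
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                psi_c = (src[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_c - psi_in        # diamond-difference outflow
                phi_new[i] += w[m] * psi_c           # quadrature for scalar flux
        if np.max(np.abs(phi_new - phi)) < 1e-9:
            phi = phi_new
            break
        phi = phi_new
    return phi

phi = sn_slab()
print(f"centre flux: {phi[len(phi) // 2]:.3f}  (infinite-medium value: 2.000)")
```

    Source iteration converges geometrically with ratio sigma_s/sigma_t, which is why production codes add acceleration schemes for highly scattering problems.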

  17. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone

    NASA Astrophysics Data System (ADS)

    Lodwick, Camille J.

    This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). 
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

  18. A Kinetic Monte Carlo Study of Fullerene Adsorption within a Pc-PBBA Covalent Organic Framework and Implications for Electron Transport.

    PubMed

    Koo, Brian T; Berard, Philip G; Clancy, Paulette

    2015-03-10

    Two-dimensional covalent organic frameworks (COFs), with their predictable assembly into ordered porous crystalline materials, tunable composition, and high charge carrier mobility, offer the possibility of creating ordered bulk heterojunction solar cells given a suitable electron-transporting material to fill the pores. The photoconductive (hole-transporting) properties of many COFs have been reported, including the recent creation of a TT-COF/PCBM solar cell by Dogru et al. Although a prototype device has been fabricated, its poor solar efficiency suggests a potential issue with electron transport caused by the interior packing of the fullerenes. Such packing information is absent and cannot be obtained experimentally. In this paper, we use Kinetic Monte Carlo (KMC) simulations to understand the dominant pore-filling mechanisms and packing configurations of C60 molecules in a Pc-PBBA COF that are similar to the COF fabricated experimentally. The KMC simulations thus offer more realistic filling conditions than our previously used Monte Carlo (MC) techniques. We found persistently large separation distances between C60 molecules that are absent in the more tractable MC simulations and which are likely to hinder electron transport significantly. We attribute the looser fullerene packing to the existence of stable motifs with pairwise distances that are mismatched with the underlying adsorption lattice of the COF. We conclude that larger pore COFs may be necessary to optimize electron transport and hence produce higher efficiency devices. PMID:26579766
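    A minimal residence-time (BKL/Gillespie-style) kinetic Monte Carlo step, of the kind underlying such pore-filling simulations, looks as follows; the 1-D adsorption lattice and single uniform rate are stand-in assumptions, far simpler than the C60/COF energetics of the study:

```python
import math
import random

def kmc_fill(n_sites=20, rate_ads=1.0, seed=1):
    """Residence-time kinetic Monte Carlo sketch: a 1-D lattice fills by
    adsorption, each empty site adsorbing at rate `rate_ads`.  Each step, time
    advances by an exponential waiting time set by the total rate, and one
    event is chosen with probability proportional to its rate (uniform here)."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    t = 0.0
    while not all(occupied):
        empty = [i for i, o in enumerate(occupied) if not o]
        total_rate = rate_ads * len(empty)
        t += -math.log(1.0 - rng.random()) / total_rate   # exponential waiting time
        occupied[rng.choice(empty)] = True                # execute the chosen event
        # a packing rule (e.g. nearest-neighbour exclusion for large adsorbates)
        # would simply filter the `empty` list here
    return t, occupied

t_final, state = kmc_fill()
print(f"lattice filled at t = {t_final:.2f}")
```

    Unlike Metropolis Monte Carlo, every KMC step is accepted and carries a physical time increment, which is what makes filling kinetics (rather than just equilibrium packings) accessible.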

  19. MCNP/X TRANSPORT IN THE TABULAR REGIME

    SciTech Connect

    HUGHES, H. GRADY

    2007-01-08

    The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

  20. A Monte Carlo neutron transport code for eigenvalue calculations on a dual-GPU system and CUDA environment

    SciTech Connect

    Liu, T.; Ding, A.; Ji, W.; Xu, X. G.; Carothers, C. D.; Brown, F. B.

    2012-07-01

    The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to the GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis, so thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of ~2 on a dual-GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
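    The generation-by-generation structure that complicates GPU porting can be seen in a minimal eigenvalue sketch (plain Python; an infinite homogeneous medium with invented cross-section probabilities, so spatial tracking is omitted because positions drop out):

```python
import random

def k_inf_mc(nu=2.5, p_fission=0.35, n_per_gen=20000, n_gen=30, n_skip=10, seed=7):
    """Generation-based Monte Carlo eigenvalue sketch for an infinite homogeneous
    medium: every history ends in capture or fission (probability p_fission),
    and fission emits nu neutrons on average.  The generation estimate of k is
    (fission neutrons born) / (source size); analytically k_inf = nu * p_fission.
    Inactive generations are skipped before tallying, as in production codes."""
    rng = random.Random(seed)
    k_sum, k_n = 0.0, 0
    for gen in range(n_gen):
        births = 0
        for _ in range(n_per_gen):
            if rng.random() < p_fission:
                # sample an integer number of fission neutrons with mean nu
                births += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
        if gen >= n_skip:
            k_sum += births / n_per_gen
            k_n += 1
        # a spatial code would now resample n_per_gen source sites from the
        # fission bank; this barrier between generations is exactly the thread
        # coordination point the record describes
    return k_sum / k_n

print(f"estimated k_inf: {k_inf_mc():.3f}  (analytic: 0.875)")
```

    The histories within a generation are independent (and thus trivially parallel); only the population-control step between generations requires synchronization.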

  1. Assessment of Parametric Uncertainty using Markov Chain Monte Carlo Methods for Surface Complexation Models in Groundwater Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.

    2010-12-01

    Parametric uncertainty in groundwater modeling is commonly assessed using the first-order-second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions, such as normality of the model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. 
In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust for simulating breakthrough curves that were not used for parameter calibration and estimation of parameter distributions.
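    A bare-bones random-walk Metropolis sampler of the kind at the core of such MCMC analyses can be sketched as follows; the skewed one-parameter target below is a stand-in assumption for the surface complexation posterior, chosen only because its credible interval is visibly asymmetric (non-normal):

```python
import math
import random

def log_post(theta):
    """Toy skewed log-posterior for a positive parameter (a Gamma(3, 1) shape),
    standing in for the non-normal parameter distributions seen in the study."""
    return 2.0 * math.log(theta) - theta if theta > 0 else -math.inf

def metropolis(n=50000, step=1.0, seed=3):
    """Random-walk Metropolis: propose, accept with prob min(1, exp(dlogp))."""
    rng = random.Random(seed)
    theta, lp = 2.0, log_post(2.0)
    out = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if lp_prop - lp >= 0.0 or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop          # accept the proposal
        out.append(theta)                      # (rejected steps repeat theta)
    return out[n // 5:]                        # drop a 20% burn-in

samples = sorted(metropolis())
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"95% credible interval: [{lo:.2f}, {hi:.2f}]")  # asymmetric about the mean
```

    A normal approximation centred on the sample mean would place the interval symmetrically, which is exactly the discrepancy between linear and MCMC intervals the record reports.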

  2. Transport calculations for a 14.8 MeV neutron beam in a water phantom

    NASA Astrophysics Data System (ADS)

    Goetsch, S. J.

    A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations using two different statistical estimators (next collision and track length) are presented.

  3. Assessment of uncertainties in the lung activity measurement of low-energy photon emitters using Monte Carlo simulation of ICRP male thorax voxel phantom.

    PubMed

    Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S

    2015-12-01

    Assessment of intake due to long-lived actinides by the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and of an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors into the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies from the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics, expressed in the form of scattering factors (SFs). SFs are found to decrease with increasing energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV; it reduces to 1.36 at 238 keV. PMID:25468992
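    A common convention for combining independent uncertainty components of this kind is addition in quadrature of the relative uncertainties; the formula and the component values below are illustrative assumptions, not necessarily the combination rule or the numbers used in the paper:

```python
import math

def combined_relative_uncertainty(*components):
    """Combine independent relative (1-sigma) uncertainties in quadrature."""
    return math.sqrt(sum(u * u for u in components))

# hypothetical component values at one photon energy (not the paper's numbers):
# CWT variation, detector positioning, background variation, counting statistics
u_total = combined_relative_uncertainty(0.15, 0.10, 0.05, 0.08)
print(f"combined relative uncertainty: {u_total:.3f}")
```

    Because the combination is in quadrature, the largest single component dominates the total, which is why low-energy photopeaks (with large CWT sensitivity and poor counting statistics) show the largest scattering factors.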

  4. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin

    NASA Astrophysics Data System (ADS)

    Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.

    2014-09-01

    Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.

  5. Monte Carlo Simulations on Neutron Transport and Absorbed Dose in Tissue-Equivalent Phantoms Exposed to High-Flux Epithermal Neutron Beams

    NASA Astrophysics Data System (ADS)

    Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.

    2010-04-01

    Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work, Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.

  6. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
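    The "Monte Carlo approach to solving transport problems" covered in these notes reduces, in its simplest analog form, to the loop below: sample a free path, move the particle, then absorb, scatter, or tally leakage. The slab geometry and cross sections are toy assumptions chosen so the pure-absorber limit has the known answer exp(-sigma_t * width):

```python
import math
import random

def slab_transmission(width=2.0, sigma_t=1.0, albedo=0.0, n_hist=100000, seed=11):
    """Analog Monte Carlo photon transport through a 1-D slab: sample a free
    path from an exponential, move, then either leak out, get absorbed, or
    scatter isotropically (with probability `albedo`).  Returns the transmitted
    fraction; for a pure absorber this approaches exp(-sigma_t * width)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                       # enter at x = 0 travelling forward
        while True:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)   # free path
            if x >= width:
                transmitted += 1               # leaked through the far face
                break
            if x < 0.0:
                break                          # reflected back out of the slab
            if rng.random() >= albedo:
                break                          # collision: absorbed
            mu = 2.0 * rng.random() - 1.0      # collision: isotropic scatter
    return transmitted / n_hist

print(f"pure absorber: {slab_transmission():.4f}  (analytic: {math.exp(-2.0):.4f})")
```

    Everything else in the course notes (tallies, variance reduction, eigenvalue iteration, parallel scheduling) is built around variations of this history loop.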

  7. Monte Carlo investigation of the increased radiation deposition due to gold nanoparticles using kilovoltage and megavoltage photons in a 3D randomized cell model

    SciTech Connect

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2013-07-15

    Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNPs on the dose deposition in five modeled regions of the cell, including the cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated for the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 µm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of the GNPs was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ~55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. 
Based on the simulations, when 80 kVp photons are used, Auger electrons have no statistically significant effect (at the p < 0.05 level) on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster, and they therefore have a negligible effect on the overall dose increase due to GNPs. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.

  8. A feasibility study to calculate unshielded fetal doses to pregnant patients in 6-MV photon treatments using Monte Carlo methods and anatomically realistic phantoms

    SciTech Connect

    Bednarz, Bryan; Xu, X. George

    2008-07-15

    A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation, due to the decrease in distance between the fetal body and the field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and the relatively constant distance between the field edge and the fetal body at each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility of accurately determining the absorbed organ doses in the mother and fetus as part of treatment planning and eventually in risk management.

  9. Theory of single-photon transport in a single-mode waveguide coupled to a cavity containing a two-level atom

    E-print Network

    Jung-Tsung Shen; Shanhui Fan

    2009-01-26

The single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed. The single-photon transmission and reflection amplitudes, as well as the cavity and the atom excitation amplitudes, are solved exactly via a real-space approach. It is shown that the dissipation of the cavity and of the atom affect the transport properties of the photons, and the relative phase between the excitation amplitudes of the cavity mode and the atom, in distinct ways.

  10. Monte Carlo tests of small-world architecture for coarse-grained networks of the United States railroad and highway transportation systems

    NASA Astrophysics Data System (ADS)

    Aldrich, Preston R.; El-Zabet, Jermeen; Hassan, Seerat; Briguglio, Joseph; Aliaj, Enela; Radcliffe, Maria; Mirza, Taha; Comar, Timothy; Nadolski, Jeremy; Huebner, Cynthia D.

    2015-11-01

    Several studies have shown that human transportation networks exhibit small-world structure, meaning they have high local clustering and are easily traversed. However, some have concluded this without statistical evaluations, and others have compared observed structure to globally random rather than planar models. Here, we use Monte Carlo randomizations to test US transportation infrastructure data for small-worldness. Coarse-grained network models were generated from GIS data wherein nodes represent the 3105 contiguous US counties and weighted edges represent the number of highway or railroad links between counties; thus, we focus on linkage topologies and not geodesic distances. We compared railroad and highway transportation networks with a simple planar network based on county edge-sharing, and with networks that were globally randomized and those that were randomized while preserving their planarity. We conclude that terrestrial transportation networks have small-world architecture, as it is classically defined relative to global randomizations. However, this topological structure is sufficiently explained by the planarity of the graphs, and in fact the topological patterns established by the transportation links actually serve to reduce the amount of small-world structure.
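The Monte Carlo randomization test described above can be sketched in pure Python. Here a toy ring lattice stands in for the coarse-grained county network, and the null model is a degree-preserving double-edge-swap randomization; all function names and parameters are illustrative, not from the study:

```python
import random

def edges_to_adj(n, edges):
    """Adjacency sets for an undirected graph on nodes 0..n-1."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def clustering(adj):
    """Mean local clustering coefficient."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def randomize(edges, n_swaps, rng):
    """Degree-preserving null model via Monte Carlo double-edge swaps."""
    eset = {(min(a, b), max(a, b)) for a, b in edges}
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(sorted(eset), 2)
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop
        e1 = (min(a, d), max(a, d))
        e2 = (min(c, b), max(c, b))
        if e1 in eset or e2 in eset:
            continue  # would create a multi-edge
        eset -= {(a, b), (c, d)}
        eset |= {e1, e2}
    return list(eset)

# toy "transportation network": ring lattice, 2 neighbors per side (C = 0.5)
n = 60
edges = [(i, (i + j) % n) for i in range(n) for j in (1, 2)]
rng = random.Random(1)
c_obs = clustering(edges_to_adj(n, edges))
c_null = sum(
    clustering(edges_to_adj(n, randomize(edges, 4 * len(edges), rng)))
    for _ in range(5)
) / 5.0
```

Comparing `c_obs` against the distribution of `c_null` over many randomizations is the essence of the significance test; a planarity-preserving null, as used in the study, would additionally reject swaps that break planarity.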

  11. Comparison of experimental and Monte-Carlo simulation of MeV particle transport through tapered/straight glass capillaries and circular collimators

    NASA Astrophysics Data System (ADS)

    Hespeels, F.; Tonneau, R.; Ikeda, T.; Lucas, S.

    2015-11-01

This study compares the capabilities of three different passive collimation devices to produce micrometer-sized beams for proton and alpha particle beams (1.7 MeV and 5.3 MeV, respectively): classical platinum TEM-like collimators, straight glass capillaries, and tapered glass capillaries. In addition, we developed a Monte-Carlo code, based on the Rutherford scattering theory, which simulates particle transport through collimating devices. The simulation results match the experimental observations of beam transport through collimators both in air and in vacuum. This research shows the focusing effect of tapered capillaries, which clearly enables a higher transmission flux. Nevertheless, aligning the capillaries with the incident beam is a prerequisite and is tedious, which makes the TEM collimator the easiest way to produce a 50 µm microbeam.
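A Rutherford-scattering Monte Carlo like the one described must repeatedly sample the deflection angle of a particle grazing the capillary wall. A minimal sketch, assuming a screened Rutherford angular distribution p(µ) ∝ (1 + 2η − µ)⁻² with an illustrative screening parameter η (not the authors' actual code or parameters):

```python
import math
import random

def sample_mu(eta, rng):
    """Inverse-CDF sample of mu = cos(theta) from the screened Rutherford
    single-scattering distribution p(mu) ∝ (1 + 2*eta - mu)**-2 on [-1, 1]."""
    xi = rng.random()
    return 1.0 + 2.0 * eta - 2.0 * eta * (1.0 + eta) / (xi + eta)

rng = random.Random(42)
eta = 1e-3  # screening parameter, assumed value for illustration
mus = [sample_mu(eta, rng) for _ in range(200_000)]
mean_mu = sum(mus) / len(mus)  # strongly forward-peaked for small eta
```

The closed-form inverse CDF follows from integrating p(µ) from −1 to µ and normalizing; small η gives the strong forward peaking that makes grazing transmission through tapered capillaries efficient.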

  12. Parallel Performance Study of Monte Carlo Photon Transport Code on Shared-, Distributed-, and Distributed-Shared-Memory Architectures

    E-print Network

    Majumdar, Amit

…developed. The first version is for the Tera Multi-Threaded Architecture (MTA) and uses Tera-specific …. The architectures targeted are the shared-memory Tera MTA, the distributed-memory Cray T3E, and the 8-way SMP IBM SP.

  13. Effective QCD and transport description of dilepton and photon production in heavy-ion collisions and elementary processes

    E-print Network

    O. Linnyk; E. L. Bratkovskaya; W. Cassing

    2015-12-26

In this review we address the dynamics of relativistic heavy-ion reactions and in particular the information obtained from electromagnetic probes that stem from the partonic and hadronic phases. The out-of-equilibrium description of strongly interacting relativistic fields is based on the theory of Kadanoff and Baym. For the modeling of the partonic phase we introduce a dynamical quasiparticle model (DQPM) for QCD in equilibrium. The widths and masses of the quasiparticles are controlled by transport coefficients in comparison to lattice QCD results. The resulting off-shell transport approach - denoted by Parton-Hadron-String Dynamics (PHSD) - also includes covariant dynamical hadronization and keeps track of the hadronic interactions in the final phase. We show that PHSD captures the bulk dynamics of heavy-ion collisions from SPS to LHC energies and provides a basis for the evaluation of the electromagnetic emissivity, using the same dynamical parton propagators as for the system evolution. Direct photon production in elementary processes and heavy-ion reactions at RHIC and LHC energies is investigated and the status of the photon v2 puzzle - a large elliptic flow of the direct photons observed in A+A collisions - is addressed. We discuss the roles of hadronic and partonic sources for the photon spectra and the flow coefficients v2 and v3 and also the possibility to subtract the QGP signal from observables. Furthermore, the production of dilepton pairs is addressed from SIS to LHC energies. The low-mass dilepton yield is enhanced due to the in-medium modification of the rho-meson and at the lowest energy also due to a multiple regeneration of Delta-resonances. In addition, a signal of the partonic degrees-of-freedom is found in the intermediate dilepton mass regime (1.2 GeV < M < 3 GeV), which gives access to the nature of the very early degrees-of-freedom in nucleus-nucleus collisions.

  14. Efficient and accurate computation of non-negative anisotropic group scattering cross sections for discrete ordinates and Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Gerts, David Walter

A new method for approximating anisotropic, multi-group scatter cross sections for use in discretized and Monte Carlo multi-group neutron transport is presented. The new method eliminates unphysical artifacts such as negative group scatter cross sections and falsely positive cross sections. Additionally, when combined with the discrete elements angular quadrature method, the new cross sections eliminate the lack of angular support in the discrete ordinates quadrature method. The new method generates piecewise-average group-to-group scatter cross sections. The accuracy and efficiency of calculating the discrete elements cross sections have improved by many orders of magnitude compared to the previous implementation of DelGrande and Mathews. The new cross sections have extended the discrete elements method to all neutron-producing representations in the Evaluated Nuclear Data Files. The new cross section method has been validated and tested with the cross section generation code, NJOY. Results of transport calculations using discrete elements, discrete ordinates, and Monte Carlo methods for two one-dimensional slab geometry problems are compared.

  15. High-energy photon transport modeling for oil-well logging

    E-print Network

    Johnson, Erik D., Ph. D. Massachusetts Institute of Technology

    2009-01-01

    Nuclear oil well logging tools utilizing radioisotope sources of photons are used ubiquitously in oilfields throughout the world. Because of safety and security concerns, there is renewed interest in shifting to ...

  16. Study on the photoelectric hot electrons generation and transport with metallic-semiconductor photonic crystals

    E-print Network

    Wang, Yu

    2015-01-01

Photoelectric hot carrier generation in metal-semiconductor junctions allows for optical-to-electrical energy conversion at photon energies below the bandgap of the semiconductor, which opens new opportunities in optical ...

  17. An Electron/Photon/Relaxation Data Library for MCNP6

    SciTech Connect

    Hughes, III, H. Grady

    2015-08-07

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  18. Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se

    SciTech Connect

    Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo

    2012-01-15

    Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Comparisons differ by approximately 2% between experimental measurements and simulations. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
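The Swank factor discussed above is conventionally computed from the moments of the pulse-height spectrum, A_S = M1²/(M0·M2) (Swank's standard definition; the snippet below is an illustrative helper, not code from this work):

```python
def swank_factor(heights, counts):
    """Swank information factor A_S = M1^2 / (M0 * M2), where M_i is the
    i-th moment of the pulse-height spectrum (heights, counts)."""
    m0 = float(sum(counts))
    m1 = sum(h * c for h, c in zip(heights, counts))
    m2 = sum(h * h * c for h, c in zip(heights, counts))
    return (m1 * m1) / (m0 * m2)

# a delta-like photopeak gives A_S = 1; spreading the spectrum lowers it
peak_only = swank_factor([10.0], [1000])
broadened = swank_factor([2.0, 6.0, 10.0], [200, 300, 500])
```

Restricting `heights`/`counts` to the photopeak region reproduces the paper's second, photopeak-only estimate, which ignores Compton and K-fluorescence contributions.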

  19. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  20. Calculs Monte Carlo en transport d'energie pour le calcul de la dose en radiotherapie sur plateforme graphique hautement parallele

    NASA Astrophysics Data System (ADS)

    Hissoiny, Sami

Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that the medical physicists and the radio-oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use of dose calculation. The compromise between these two opposing factors gave way to the creation of several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware platform existed before 2007, and certain scientific computations were already carried out on the GPU. The year 2007, however, marked the arrival of the CUDA programming language, which makes it possible to program the GPU without dealing with graphics contexts. The GPU is a massively parallel computing platform and is well suited to data-parallel algorithms. This thesis aims to determine how to maximize the use of a graphics processing unit (GPU) to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out completely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a gamma criterion of 2%-2 mm while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculation.
Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as to TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a gamma criterion of 2%-2 mm of two experiments designed to highlight the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculations through Monte Carlo simulations and that the GPUMCD software platform makes it possible to achieve fast and accurate results.
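The 2%-2 mm gamma criterion used for these comparisons can be illustrated with a minimal 1D evaluation (a brute-force sketch for intuition only, unrelated to the GPUMCD implementation):

```python
import math

def gamma_1d(ref, ev, dta=2.0, dd=0.02):
    """Global 1D gamma index: ref and ev are lists of (position_mm, dose).
    gamma <= 1 means agreement within dd*max_dose in dose or dta mm in space."""
    dmax = max(d for _, d in ref)
    out = []
    for rp, rd in ref:
        g2 = min(
            ((ep - rp) / dta) ** 2 + ((ed - rd) / (dd * dmax)) ** 2
            for ep, ed in ev
        )
        out.append(math.sqrt(g2))
    return out

# Gaussian depth-dose profile vs. a copy shifted by 1 mm on a fine grid:
# a pure 1 mm shift sits at gamma = 0.5 under a 2 mm DTA criterion
ref = [(x * 0.25, math.exp(-((x * 0.25 - 20.0) / 8.0) ** 2)) for x in range(161)]
ev = [(p + 1.0, d) for p, d in ref]
gam = gamma_1d(ref, ev)
pass_rate = sum(1 for g in gam if g <= 1.0) / len(gam)
```

Production implementations interpolate the evaluated distribution and search in 3D, but the minimized distance-plus-dose-difference metric is the same.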

  1. Effect of burst and recombination models for Monte Carlo transport of interacting carriers in a-Se x-ray detectors on Swank noise

    SciTech Connect

    Fang, Yuan; Karim, Karim S.; Badano, Aldo

    2014-01-15

Purpose: The authors describe the modification to a previously developed Monte Carlo model of semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithms are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/µm, while the recombination fraction decreased for SSA compared to the SUV model at 30 V/µm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. 
For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.
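The SUV and SSA burst geometries amount to two different sampling rules for the initial electron-hole pair positions around a deposition site: uniform inside a sphere versus uniform on its surface. A hypothetical sketch of just that sampling step (function names and the burst radius are illustrative, not ARTEMIS code):

```python
import math
import random

def _unit_dir(rng):
    """Uniformly distributed direction on the unit sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def burst_suv(center, r_b, n, rng):
    """SUV model: pairs sampled uniformly inside a sphere of radius r_b."""
    pts = []
    for _ in range(n):
        ux, uy, uz = _unit_dir(rng)
        r = r_b * rng.random() ** (1.0 / 3.0)  # cube root: uniform in volume
        pts.append((center[0] + r * ux, center[1] + r * uy, center[2] + r * uz))
    return pts

def burst_ssa(center, r_b, n, rng):
    """SSA model: pairs sampled on the spherical surface of radius r_b."""
    pts = []
    for _ in range(n):
        ux, uy, uz = _unit_dir(rng)
        pts.append((center[0] + r_b * ux, center[1] + r_b * uy, center[2] + r_b * uz))
    return pts

rng = random.Random(7)
suv = burst_suv((0.0, 0.0, 0.0), 10.0, 50_000, rng)
ssa = burst_ssa((0.0, 0.0, 0.0), 10.0, 50_000, rng)
mean_r_suv = sum(math.dist(p, (0.0, 0.0, 0.0)) for p in suv) / len(suv)
```

The SSA cloud starts with every pair at the full burst radius (mean separation r_b), while the SUV cloud has mean radius 0.75 r_b, which is one plausible reason the two models respond differently to the applied field.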

  2. Photon and dilepton production at FAIR and RHIC-BES energies using coarse-grained microscopic transport simulations

    E-print Network

    Stephan Endres; Hendrik van Hees; Marcus Bleicher

    2015-12-21

We present calculations of dilepton and photon spectra for the energy range $E_{\text{lab}}=2-35$ AGeV which will be available for the Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR). The same energy regime will also be covered by phase II of the Beam Energy Scan at the Relativistic Heavy-Ion Collider (RHIC-BES). Coarse-grained dynamics from microscopic transport calculations of the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model is used to determine temperature and chemical potentials, which allows for the use of dilepton- and photon-emission rates from equilibrium quantum-field theory calculations. The results indicate that non-equilibrium effects, the presence of baryonic matter and the creation of a deconfined phase might show up in specific manners in the measurable dilepton invariant mass spectra and in the photon transverse momentum spectra. However, as the many influences are difficult to disentangle, we argue that the challenge for future measurements of electromagnetic probes will be to provide a high precision with uncertainties much lower than in previous experiments. Furthermore, a systematic study of the whole energy range covered by FAIR and RHIC-BES is necessary to discriminate between different effects, which influence the spectra, and to identify possible signatures of a phase transition.

  3. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    SciTech Connect

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  4. Two-Dimensional Radiation Transport in Cylindrical Geometry: Ray-Tracing Compared to Monte Carlo Solutions for a Two-Level Atom

    NASA Astrophysics Data System (ADS)

    Apruzese, J. P.; Giuliani, J. L.

    2008-11-01

    Radiation plays a critical role in the dynamics of Z-pinch implosions. Modeling of Z-pinch experiments therefore needs to include an accurate but efficient algorithm for photon transport. Such algorithms exist for the one-dimensional (1D) approximation. In the present work, we report progress toward this goal in a 2D (r,z) geometry, intended for use in radiation hydrodynamics calculations of dynamically evolving Z pinches. We have tested a radiation transport algorithm that uses discrete ordinate sets for the ray in 3-space, and the multifrequency integral solution along each ray. The published solutions of Avery et al. [1] for the line source functions are used as a benchmark to ensure the accuracy of our approach. We discuss the coupling between the radiation field and kinetics that results in large departures from LTE, ruling out use of the diffusion approximation. [1] L. W. Avery, L. L. House, and A. Skumanich, JQSRT 9, 519 (1969).

  5. The effect of biological shielding on fast neutron and photon transport in the VVER-1000 mock-up model placed in the LR-0 reactor.

    PubMed

Košťál, Michal; Cvachovec, František; Milčák, Ján; Mravec, Filip

    2013-05-01

The paper is intended to show the effect of a biological shielding simulator on fast neutron and photon transport in its vicinity. The fast neutron and photon fluxes were measured by means of scintillation spectroscopy using a 45×45 mm and a 10×10 mm cylindrical stilbene detector. The neutron spectrum was measured in the range of 0.6-10 MeV and the photon spectrum in 0.2-9 MeV. The results of the experiment are compared with calculations. The calculations were performed with various nuclear data libraries. PMID:23434890

  6. Ensemble Monte Carlo analysis of subpicosecond transient electron transport in cubic and hexagonal silicon carbide for high power SiC-MESFET devices

    NASA Astrophysics Data System (ADS)

    Belhadji, Youcef; Bouazza, Benyounes; Moulahcene, Fateh; Massoum, Nordine

    2015-05-01

In a comparative framework, an ensemble Monte Carlo simulation was used to study the electron transport characteristics of two different silicon carbide (SiC) polytypes, 3C-SiC and 4H-SiC. The simulation was performed using a three-valley band structure model; the valleys are spherical and nonparabolic. The aim of this work is to follow the trajectories of 20,000 electrons under high field (from 50 kV to 600 kV) and high temperature (from 200 K to 700 K). We note that this model has already been used in studies of many zincblende and wurtzite semiconductors. The obtained results, compared with results found in many previous studies, show a notable drift velocity overshoot. This overshoot appears in the subpicosecond transient regime and is directly tied to the applied electric field and the lattice temperature.

  7. Weak second-order splitting schemes for Lagrangian Monte Carlo particle methods for the composition PDF/FDF transport equations

    SciTech Connect

    Wang Haifeng Popov, Pavel P.; Pope, Stephen B.

    2010-03-01

    We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
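The second-order behavior of splitting schemes can be checked on a deterministic analogue: Strang splitting of a linear system with non-commuting generators, whose global error should fall by a factor of about 4 when the step size is halved. This is an illustrative convergence test in the spirit of the paper's MMS verification, not the paper's SDE schemes themselves:

```python
def mat_mul(X, Y):
    """2x2 matrix product."""
    return [
        [X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
        [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]],
    ]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(X, s):
    return [[X[i][j] * s for j in range(2)] for i in range(2)]

def expm(M, terms=40):
    """2x2 matrix exponential via a truncated Taylor series."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, M), 1.0 / k)
        out = mat_add(out, term)
    return out

def strang_error(A, B, T, nsteps):
    """Error of Strang splitting exp(hA/2) exp(hB) exp(hA/2) over nsteps,
    compared with the exact propagator exp(T(A+B))."""
    h = T / nsteps
    step = mat_mul(expm(mat_scale(A, h / 2)),
                   mat_mul(expm(mat_scale(B, h)), expm(mat_scale(A, h / 2))))
    approx = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(nsteps):
        approx = mat_mul(step, approx)
    exact = expm(mat_scale(mat_add(A, B), T))
    return max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))

# non-commuting pair: a rotation generator and a diagonal drift
A = [[0.0, 1.0], [-1.0, 0.0]]
B = [[0.0, 0.0], [0.0, 1.0]]
e1 = strang_error(A, B, 1.0, 10)
e2 = strang_error(A, B, 1.0, 20)
ratio = e1 / e2  # ~4 for a second-order scheme
```

For the stochastic case the same halving experiment is run on weak-error functionals of the particle ensemble, which is exactly what the manufactured-solution tests in the paper automate.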

  8. C^2-Ray: A new method for photon-conserving transport of ionizing radiation

    E-print Network

    Garrelt Mellema; Ilian T. Iliev; Marcelo A. Alvarez; Paul R. Shapiro

    2005-09-29

We present a new numerical method for calculating the transfer of ionizing radiation, called C^2-Ray (Conservative, Causal Ray-tracing). The method is explicitly photon-conserving, so the depletion of ionizing photons by bound-free opacity is guaranteed to equal the photoionizations these photons caused. As a result, grid cells can be large and very optically-thick without loss of accuracy. The method also uses an analytical relaxation solution for the ionization rate equations for each time step which can accommodate time steps that greatly exceed the characteristic ionization and ionization front crossing times. Together, these features make it possible to integrate the equation of transfer along a ray with many fewer cells and time steps than previous methods. For multi-dimensional calculations, the code utilizes short-characteristics ray tracing. C^2-Ray is well-suited for coupling radiative transfer to gas and N-body dynamics methods, on both fixed and adaptive grids, without imposing additional limitations on the time step and grid spacing. We present several tests of the code involving propagation of ionization fronts in one and three dimensions, in both homogeneous and inhomogeneous density fields. We compare to analytical solutions for the ionization front position and velocity, some of which we derive here for the first time.
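The photon-conserving bookkeeping can be caricatured in one dimension: ionizations in a sweep are defined as exactly the photons absorbed, so conservation holds by construction no matter how coarse the cells are. A toy opaque-cell sketch under those assumptions (no recombinations, one ionization per absorbed photon; not C^2-Ray itself):

```python
def advance_front(neutral, photons):
    """One explicitly photon-conserving sweep along a 1D ray: every absorbed
    photon ionizes exactly one atom; surplus photons pass to the next cell."""
    remaining = photons
    ionized = [0.0] * len(neutral)
    for i, n_hi in enumerate(neutral):
        absorbed = min(n_hi, remaining)
        neutral[i] -= absorbed
        ionized[i] = absorbed
        remaining -= absorbed
        if remaining <= 0.0:
            break
    return ionized, remaining

cells = [100.0] * 10            # neutral atoms per cell along the ray
total_ionized = 0.0
for _ in range(4):              # four time steps of 180 photons each
    ion, leftover = advance_front(cells, 180.0)
    total_ionized += sum(ion)
```

The ionization front simply marches one "photon budget" at a time; the real method replaces the opaque-cell `min` with the cell's bound-free optical depth and adds the analytic relaxation solution for the rate equations.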

  9. The role of plasma evolution and photon transport in optimizing future advanced lithography sources

    SciTech Connect

    Sizyuk, Tatyana; Hassanein, Ahmed

    2013-08-28

Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as target that has many advantages including generation of stable continuous targets at high repetition rate, larger photons collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target is to generate a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets are of diameters in the range of 20–30 µm in LPP devices with dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more that can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is in the low amount of atomic debris produced due to the small initial mass of droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photons generation and collection. The importance and accurate description of modeling these physical processes increase with the decrease in target size and its simulation domain.

  10. PHOTONIC NANOJET

    E-print Network

    Poon, Andrew Wing On

Photonic Nanojet Scanning Microscopy. Project Members: LEE Yi. Final Year Project (2004-2005). Overview: photonic nanojet; photonic nanojet measurement; conventional and photonic nanojet scanning microscopes; AFM tip scanning.

  11. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  12. Controlling the transport of single photons by tuning the frequency of either one or two cavities in an array of coupled cavities

    E-print Network

    Jie-Qiao Liao; Z. R. Gong; Lan Zhou; Yu-xi Liu; C. P. Sun; Franco Nori

    2010-04-14

    We theoretically study how to control transport, bound states, and resonant states of a single photon in a one-dimensional coupled-cavity array. We find that the transport of a single photon in the cavity array can be controlled by tuning the frequency of either one or two cavities. If one of the cavities in the array has a tunable frequency, and its frequency is tuned to be larger (or smaller) than those of other cavities, then there is a photon bound state above (or below) the energy band of the coupled cavity array. However, if two cavities in the array have tunable frequencies, then there exist both bound states and resonant states. When the frequencies of the two cavities are chosen to be much larger than those of other cavities and the hopping couplings between any two nearest-neighbor cavities are weak, a single photon with a resonant wave vector can be trapped in the region between the two frequency-tunable cavities. In this case, a quantum supercavity can be formed by these two frequency-tunable cavities. We also study how to apply this photon transport control to an array of coupled superconducting transmission line resonators.
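The single detuned cavity has a simple numerical treatment: sweep the tight-binding lattice equation leftward from a pure transmitted plane wave and decompose the left side into incident plus reflected waves. The sketch below is an illustrative discrete-lattice calculation, not the authors' real-space formalism, and for one detuned cavity it reproduces the standard transmission 1/(1 + (Δ/2J sin k)²):

```python
import cmath
import math

def transmission(eps, k, J=1.0):
    """Transmission of a single photon (plane wave, wavevector k, hopping J)
    through a 1D coupled-cavity array; eps[n] is the detuning of the n-th
    cavity in the scattering region (zero outside)."""
    E = -2.0 * J * math.cos(k)  # tight-binding dispersion
    N = len(eps)
    # pure transmitted plane wave to the right of the detuned region
    psi_hi = cmath.exp(1j * k * (N + 2))  # psi_{N+2}
    psi_lo = cmath.exp(1j * k * (N + 1))  # psi_{N+1}
    # sweep the lattice equation psi_{n-1} = ((eps_n - E)/J) psi_n - psi_{n+1}
    for n in range(N + 1, -1, -1):
        e = eps[n - 1] if 1 <= n <= N else 0.0
        psi_prev = ((e - E) / J) * psi_lo - psi_hi
        psi_hi, psi_lo = psi_lo, psi_prev
    psi0, psim1 = psi_hi, psi_lo  # wavefunction at sites 0 and -1
    # left region: psi_n = A e^{ikn} + B e^{-ikn}; T = 1/|A|^2 (t normalized)
    A = (psim1 - cmath.exp(1j * k) * psi0) / (-2j * math.sin(k))
    return 1.0 / abs(A) ** 2
```

Passing a list such as `[delta, 0.0, delta]` models the two frequency-tunable cavities of the paper as a discrete resonator, so the appearance of sharp transmission resonances between them can be explored numerically.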

  13. Thermal photon and dilepton production and electric charge transport in a baryon rich strongly coupled QGP from holography

    E-print Network

    Stefano Ivo Finazzo; Romulo Rougemont

    2015-10-12

We obtain the thermal photon and dilepton production rates in a strongly coupled quark-gluon plasma (QGP) at both zero and nonzero baryon chemical potential using a bottom-up Einstein-Maxwell-Dilaton (EMD) holographic model that is in good quantitative agreement with the thermodynamics of $(2+1)$-flavor lattice QCD around the crossover transition for baryon chemical potentials up to 400 MeV, which may be reached in the beam energy scan (BES) at RHIC. We find that increasing the temperature $T$ and the baryon chemical potential $\mu_B$ enhances the peak present in both spectra. We also obtain the electric charge susceptibility, the DC and AC electric conductivities and the electric charge diffusion as functions of $T$ and $\mu_B$. We find that electric diffusive transport is suppressed as one increases $\mu_B$. At zero baryon density, we compare our results for the DC electric conductivity and the electric charge diffusion with the latest lattice data available for these observables and find reasonable agreement around the crossover transition. Therefore, our holographic results may be used to constrain the magnitude of the thermal photon and dilepton production rates in a strongly coupled QGP, which we found to be at least one order of magnitude below perturbative estimates.

  14. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. 
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
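
The kernel-based optimization described in this record can be sketched as a constrained least-squares problem: pre-generated per-beamlet Monte Carlo dose kernels form a matrix, and nonnegative beamlet weights are sought so the summed dose meets the prescription. This is a minimal illustrative sketch, not the thesis's actual algorithm; the names `K`, `d`, the step size, and projected gradient descent are all assumptions.

```python
import numpy as np

def optimize_weights(K, d, iters=500, lr=None):
    """Hypothetical sketch: find w >= 0 minimizing ||K @ w - d||^2, where
    K holds per-beamlet Monte Carlo dose kernels (voxels x beamlets) and
    d is the prescribed dose per voxel."""
    K = np.asarray(K, float)
    d = np.asarray(d, float)
    if lr is None:
        lr = 1.0 / np.linalg.norm(K, 2) ** 2   # safe step: 1 / sigma_max(K)^2
    w = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ w - d)               # gradient of 0.5*||Kw - d||^2
        w = np.maximum(w - lr * grad, 0.0)     # project onto the constraint w >= 0
    return w
```

In a real planner the objective would encode dose-volume constraints rather than a plain quadratic misfit, but the kernel-times-weights structure is the same.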

  15. Monte Carlo study of electron spectra and dose from backscattered radiation in the vicinity of media interfaces for monoenergetic photons of 50-1250 keV

    SciTech Connect

    Verhaegen, F.; Seuntjens, J.

    1995-09-01

In the present paper electron fluence spectra and backscatter dose factors for monoenergetic photons (50, 100, 250 and 1250 keV) are presented. The influence of the atomic number of the backscatter materials glass, bone, steel and titanium is studied. For all radiations increases in dose were found in the immediate vicinity of the interface in a region extending over a distance determined by the photon energy. In a thin layer extending several tens of micrometers from the interface, a large increase in dose was found for 50 and 100 keV photons. The largest backscatter dose factor (3.5) was found for 50 keV photons and a steel interface. This large effect was found to be due largely to backscattered photoelectrons. K-shell fluorescent photons from titanium and steel were also included but were found to have almost no effect on backscatter dose factors or electron spectra. With both K-shell fluorescence and sampling of the photoelectron angular distribution switched off, much smaller backscatter factors were obtained. Beyond the thin region near the interface where the dose is increased, significant dose depressions were found for photons of these energies in a region extending several centimeters from the interface. The effect was maximum for 50 keV photons, for which dose depressions of about 35% for steel and titanium and 15% for glass and bone were obtained. For 250 and 1250 keV photons a more modest dose enhancement was found close to the interface (40-50%) but extending over a larger region (e.g. 5 mm for 1250 keV photons). Small differences in radiation quality nearer to the interface were found as expressed by track-averaged and dose-averaged restricted linear energy transfer, LET100,T and LET100,D, with a maximum effect for the 100 keV photons. For this radiation quality a decrease of 50% in LET100,T was found close to the interface. 29 refs., 4 figs.

  16. Monte Carlo study of electron spectra and dose from backscattered radiation in the vicinity of media interfaces for monoenergetic photons of 50-1250 keV.

    PubMed

    Verhaegen, F; Seuntjens, J

    1995-09-01

    In the present paper electron fluence spectra and backscatter dose factors for monoenergetic photons (50, 100, 250 and 1250 keV) are presented. The influence of the atomic number of the backscatter materials glass, bone, steel and titanium is studied. For all radiations increases in dose were found in the immediate vicinity of the interface in a region extending over a distance determined by the photon energy. In a thin layer extending several tens of micrometers from the interface, a large increase in dose was found for 50 and 100 keV photons. The largest backscatter dose factor (3.5) was found for 50 keV photons and a steel interface. This large effect was found to be due largely to backscattered photoelectrons. K-shell fluorescent photons from titanium and steel were also included but were found to have almost no effect on backscatter dose factors or electron spectra. With both K-shell fluorescence and sampling of the photoelectron angular distribution switched off, much smaller backscatter factors were obtained. Beyond the thin region near the interface where the dose is increased, significant dose depressions were found for photons of these energies in a region extending several centimeters from the interface. The effect was maximum for 50 keV photons, for which dose depressions of about 35% for steel and titanium and 15% for glass and bone were obtained. For 250 and 1250 keV photons a more modest dose enhancement was found close to the interface (40-50%) but extending over a larger region (e.g. 5 mm for 1250 keV photons). Small differences in radiation quality nearer to the interface were found as expressed by track-averaged and dose-averaged restricted linear energy transfer, LET100,T and LET100,D, with a maximum effect for the 100 keV photons. For this radiation quality a decrease of 50% in LET100,T was found close to the interface. PMID:7652173

  17. System and method for radiation dose calculation within sub-volumes of a monte carlo based particle transport grid

    DOEpatents

    Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)

    2000-01-01

    A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
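
The five-step method in this patent abstract reduces to a simple accumulation: each dosel mass is the sum of (common volume × voxel density) over overlapping voxels, and dose is deposited energy divided by that mass. A minimal sketch, assuming a flat list of precomputed overlaps (the data layout and names are illustrative, not from the patent):

```python
def dosel_doses(overlaps, voxel_density, energy_per_dosel):
    """overlaps: iterable of (dosel_id, voxel_id, common_volume) in cm^3;
    voxel_density: dict voxel_id -> mass density in g/cm^3;
    energy_per_dosel: dict dosel_id -> energy deposited in that dosel."""
    masses = {}
    for dosel, voxel, vol in overlaps:
        # incremental dosel mass = common volume x voxel mass density
        masses[dosel] = masses.get(dosel, 0.0) + vol * voxel_density[voxel]
    # dose = deposited energy / dosel mass
    return {d: energy_per_dosel.get(d, 0.0) / m for d, m in masses.items()}
```

Computing the common volumes themselves (the geometric intersection of the dosel and voxel grids) is the harder step that the patent's common volume calculation module performs.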

  18. MCNP™ Monte Carlo: A precis of MCNP

    SciTech Connect

    Adams, K.J.

    1996-06-01

MCNP™ is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

  19. Dopamine Transporter Single-Photon Emission Computerized Tomography Supports Diagnosis of Akinetic Crisis of Parkinsonism and of Neuroleptic Malignant Syndrome

    PubMed Central

    Martino, G.; Capasso, M.; Nasuti, M.; Bonanni, L.; Onofrj, M.; Thomas, A.

    2015-01-01

Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments yet is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably evidenced by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in caudate and putamen were reduced by 95% to 80%, to noise level with a nearly complete loss of striatum dopamine transporter-binding, corresponding to the “burst striatum” pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing the outstanding binding reduction, presynaptic dopamine transporter ligand can provide instrumental evidence of AC in Parkinsonism and NMS. PMID:25837755

  20. A study of the dosimetry of small field photon beams used in intensity-modulated radiation therapy in inhomogeneous media: Monte Carlo simulations and algorithm comparisons and corrections

    NASA Astrophysics Data System (ADS)

    Jones, Andrew Osler

There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth-dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue.
A thorough analysis of the effects of density, and field size on dose to the lung and how modern dose calculation algorithms compare to Monte Carlo data is presented in this research project. This work can be used as a basis to further refine an algorithm's accuracy in low-density media or to correct prior dosimetric results.
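
The Dose Correction Factors mentioned in this abstract scale an algorithm's depth-dose curve onto the Monte Carlo reference. A plausible point-by-point form (assumed here; the thesis may define DCF differently) is DCF(z) = D_MC(z) / D_alg(z) along depth:

```python
import numpy as np

def dose_correction_factors(d_mc, d_alg, eps=1e-12):
    """Hypothetical sketch: per-depth ratio of Monte Carlo dose to
    algorithm-predicted dose; multiplying the algorithm's curve by these
    factors reproduces the Monte Carlo curve."""
    d_mc = np.asarray(d_mc, float)
    d_alg = np.asarray(d_alg, float)
    return d_mc / np.maximum(d_alg, eps)   # guard against division by zero
```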

  1. Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code

    SciTech Connect

    Rhoades, W.A.; Childs, R.L.

    1982-07-01

DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.

  2. Monte Carlo simulations of plutonium gamma-ray spectra

    SciTech Connect

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-07-16

Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded with a detector response function to produce a realistic spectrum. Peaks in the plutonium spectrum were produced with Lorentzian shapes for the x-rays and Gaussian distributions for the gamma rays. The MGA code determined the Pu isotopes and specific power from this calculated spectrum and compared it to a similar analysis of a measured spectrum.
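
The response-folding step described here can be illustrated by spreading each discrete line over an energy grid with a Gaussian of the detector's resolution. This is a generic sketch (a real HPGe response also includes Compton continua and, for x-rays, Lorentzian broadening); the grid, line list, and width are assumptions:

```python
import numpy as np

def fold_spectrum(energies, counts, grid, sigma):
    """Spread each line (energy, counts) over `grid` with a Gaussian of
    width `sigma` (same energy units), conserving total counts per line."""
    grid = np.asarray(grid, float)
    out = np.zeros_like(grid)
    for e, c in zip(energies, counts):
        g = np.exp(-0.5 * ((grid - e) / sigma) ** 2)
        out += c * g / g.sum()          # normalise so each line keeps its area
    return out
```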

  3. Theoretical and experimental investigations of asymmetric light transport in graded index photonic crystal waveguides

    SciTech Connect

    Giden, I. H. Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.

    2014-01-20

To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on the spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8 GHz–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
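
The quoted contrast ratio can be checked from the quoted peak transmissions; a common definition (assumed here, since the abstract does not give one) is C = (T_forward − T_backward) / (T_forward + T_backward):

```python
def contrast_ratio(t_fwd, t_bwd):
    """Signal contrast ratio from forward and backward transmissions
    (assumed definition: difference over sum)."""
    return (t_fwd - t_bwd) / (t_fwd + t_bwd)

# With the abstract's peak values, 49.8% forward and 4.11% backward:
c = contrast_ratio(0.498, 0.0411)   # about 0.85, consistent with "above 0.80"
```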

  4. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. PMID:24162375
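
The frequency- and dose-mean lineal energies compared in this study follow from the standard microdosimetry definitions y_F = ⟨y⟩ and y_D = ⟨y²⟩/⟨y⟩ over the lineal-energy distribution f(y). A sketch on a discretised distribution (the input arrays are illustrative):

```python
import numpy as np

def lineal_energy_means(y, f):
    """y: lineal-energy bin values; f: (unnormalised) frequency distribution.
    Returns (y_F, y_D), the frequency-mean and dose-mean lineal energies."""
    y = np.asarray(y, float)
    f = np.asarray(f, float)
    f = f / f.sum()                 # normalise the frequency distribution
    y_f = np.sum(y * f)             # frequency mean: <y>
    y_d = np.sum(y**2 * f) / y_f    # dose mean: <y^2> / <y>
    return y_f, y_d
```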

  5. Monte Carlo transport model comparison with 1A GeV accelerated iron experiment: heavy-ion shielding evaluation of NASA space flight-crew foodstuff

    NASA Astrophysics Data System (ADS)

    Stephens, D. L.; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.

Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy ion transport code developed by Zeitlin at LBNL is compared with experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Chemistry and Physics. The results indicate a good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R² value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for the slope and R², respectively.
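
A least-squares fit with the intercept forced to zero has the closed form b = Σxy / Σx². The R² convention for zero-intercept fits varies; the version below uses the uncentred total sum of squares, which is one common choice (an assumption, since the abstract does not specify):

```python
import numpy as np

def zero_intercept_fit(x, y):
    """Fit y = b*x by least squares with the intercept forced to zero."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    b = np.sum(x * y) / np.sum(x * x)          # closed-form slope
    ss_res = np.sum((y - b * x) ** 2)          # residual sum of squares
    r2 = 1.0 - ss_res / np.sum(y ** 2)         # uncentred R^2 convention
    return b, r2
```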

  6. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods: A Case Study at the South Oyster Bacterial Transport Site in Virginia

    SciTech Connect

    Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.

    2004-12-22

    The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
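
The MCMC machinery behind this kind of estimation can be illustrated with a generic Metropolis sampler: draw parameter samples from a posterior given only a log-probability function. This is a textbook sketch, not the authors' site-specific model; the target, proposal width, and burn-in are assumptions:

```python
import numpy as np

def metropolis(log_prob, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' = x + step*N(0,1), accept with
    probability min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)
```

In the paper's setting the log-probability would combine the lithofacies prior with petrophysical likelihoods linking GPR attenuation to Fe(II)/Fe(III) concentrations.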

  7. A user's guide to MICAP: A Monte Carlo Ionization Chamber Analysis Package

    SciTech Connect

    Johnson, J.O.; Gabriel, T.A.

    1988-01-01

A collection of computer codes entitled MICAP - A Monte Carlo Ionization Chamber Analysis Package has been developed to determine the response of a gas-filled cavity ionization chamber in a mixed neutron and photon radiation environment. In particular, MICAP determines the neutron, photon, and total response of the ionization chamber. The applicability of MICAP encompasses all aspects of mixed-field dosimetry analysis, including detector design, pre-experimental planning, and post-experimental analysis. The MICAP codes include: RDNDF for reading and processing ENDF/B-formatted cross section files, MICRO for manipulating microscopic cross section data sets, MACRO for creating macroscopic cross section data sets, NEUTRON for transporting neutrons, RECOMB for calculating correction data due to ionization chamber saturation effects, HEAVY for transporting recoil heavy ions and charged particles, PECSP for generating photon and electron cross section and material data sets, PHOTPREP for generating photon source input tapes, and PHOTON for transporting photons and electrons. The codes are generally tailored to provide numerous input options, but whenever possible, default values are supplied which yield adequate results. All of the MICAP codes function independently, and are operational on the ORNL IBM 3033 computer system. 14 refs., 27 figs., 49 tabs.

  8. Breaking symmetries in ordered materials : spin polarized light transport in magnetized noncentrosymmetric 1D photonic crystals, and photonic gaps and fabrication of quasiperiodic structured materials from interference lithography

    E-print Network

    Bita, Ion

    2006-01-01

    Effects of breaking various symmetries on optical properties in ordered materials have been studied. Photonic crystals lacking space-inversion and time-reversal symmetries were shown to display nonreciprocal dispersion ...

  9. Photon Maps Photon Tracing

    E-print Network

    Lischinski, Dani

Photon tracing simulates light propagation by shooting photons from the light sources, storing the incidences along each photon's path, and implementing surface properties statistically (Russian roulette). The photon map keeps, for each incidence, the incidence point (in 3D) and the normal at that point.
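
The Russian-roulette termination mentioned in this record can be sketched in a few lines: at each bounce a photon survives with probability equal to the surface reflectance, and because survivors keep their full power the estimator stays unbiased. Scene details are placeholders; a real tracer would also sample a new direction and record hits in the photon map.

```python
import random

def trace_photon_bounces(reflectance, max_bounces=100, rng=random.random):
    """Return the number of bounces a photon survives under Russian roulette
    with survival probability `reflectance` at every surface hit."""
    bounces = 0
    while bounces < max_bounces:
        if rng() >= reflectance:     # absorbed: terminate the path
            break
        bounces += 1                 # survived: continue with unchanged power
    return bounces
```

For reflectance r the expected path length is r/(1−r) bounces, so paths stay short without introducing bias.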

  10. Retinoblastoma external beam photon irradiation with a special ‘D’-shaped collimator: a comparison between measurements, Monte Carlo simulation and a treatment planning system calculation

    NASA Astrophysics Data System (ADS)

    Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.

    2012-11-01

Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm². Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations.
Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.
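
The gamma test used in this comparison combines a distance-to-agreement criterion (here 3 mm) with a dose-difference criterion (here 3%); a point passes when the minimum combined metric over the reference profile is ≤ 1. A minimal 1D sketch (global dose normalisation and array layout are assumptions):

```python
import numpy as np

def gamma_index(x_eval, d_eval, x_ref, d_ref, dta=3.0, dd=0.03):
    """1D gamma index: for each evaluated point, the minimum over reference
    points of sqrt((dx/dta)^2 + (dD/(dd*Dmax))^2). dta in mm, dd fractional."""
    x_ref = np.asarray(x_ref, float)
    d_ref = np.asarray(d_ref, float)
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        dist2 = ((x_ref - xe) / dta) ** 2
        dose2 = ((d_ref - de) / (dd * d_ref.max())) ** 2   # global normalisation
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)
```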

  11. Single-photon emission tomography imaging of serotonin transporters in the nonhuman primate brain with [(123)I]ODAM.

    PubMed

    Acton, P D; Mu, M; Plössl, K; Hou, C; Siciliano, M; Zhuang, Z P; Oya, S; Choi, S R; Kung, H F

    1999-10-01

We have described previously a selective serotonin transporter (SERT) radioligand, [(123)I]IDAM. We now report a similarly potent, but more stable IDAM derivative, 5-iodo-2-[2-[(dimethylamino)methyl]phenoxy]benzyl alcohol ([(123)I]ODAM). The imaging characteristics of this radioligand were studied and compared against [(123)I]IDAM. Dynamic sequences of single-photon emission tomography (SPET) scans were obtained on three female baboons after injection of 375 MBq of [(123)I]ODAM. Displacing doses (1 mg/kg) of the selective SERT ligand (+)McN5652 were administered 120 min after injection of [(123)I]ODAM. Total integrated brain uptake of [(123)I]ODAM was about 30% higher than [(123)I]IDAM. After 60-120 min, the regional distribution of tracer within the brain reflected the characteristic distribution of SERT. Peak specific binding in the midbrain occurred 120 min after injection, with an equilibrium midbrain to cerebellar ratio of 1.50 ± 0.08, which was slightly lower than the value for [(123)I]IDAM (1.80 ± 0.13). Both the binding kinetics and the metabolism of [(123)I]ODAM were slower than those of [(123)I]IDAM. Following injection of a competing SERT ligand, (+)McN5652, the tracer exhibited washout from areas with high concentrations of SERT, with a dissociation kinetic rate constant k(off) = 0.0085 ± 0.0028 min(-1) in the midbrain. Similar studies using nisoxetine and methylphenidate showed no displacement, consistent with its low binding affinity to norepinephrine and dopamine transporters, respectively. These results suggest that [(123)I]ODAM is suitable for selective SPET imaging of SERT in the primate brain, with higher uptake and slower kinetics and metabolism than [(123)I]IDAM, but also a slightly lower selectivity for SERT. PMID:10541838

  12. From Simple Plane-Parallel to Complex Monte Carlo Calculations of Solar Fluxes and Radiances for Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander

    2004-01-01

In my presentation, I will describe several approximation methods with different levels of complexity; they will be gradually applied to simple examples of horizontally inhomogeneous clouds. Understanding of photon horizontal transport and radiative smoothing can help to improve the accuracy of the methods. The accuracy of the methods will be compared with full Monte Carlo calculations. The specifics of Monte Carlo in cloudy atmospheres will also be discussed. A special emphasis will be put on the strong forward-scattering peak in the phase functions.
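
The strongly forward-peaked phase functions mentioned here are often modelled with the Henyey-Greenstein function, which can be sampled exactly by inverting its CDF. A sketch (g is the asymmetry parameter; g ≈ 0.85, a typical cloud-droplet value, is assumed in the test):

```python
import numpy as np

def sample_hg_cos_theta(g, n, seed=0):
    """Sample n scattering-angle cosines from the Henyey-Greenstein phase
    function with asymmetry parameter g, via the standard CDF inversion."""
    u = np.random.default_rng(seed).random(n)
    if g == 0:
        return 1.0 - 2.0 * u                      # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```

A quick sanity check is that the sample mean cosine converges to g, the defining property of the asymmetry parameter.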

  13. The FERMI@Elettra free-electron-laser source for coherent X-ray physics: photon properties, beam transport system, and applications

    SciTech Connect

    Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio

    2010-04-05

FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (τ ≈ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high-quality, longitudinally coherent photon pulses. This capability together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.

  14. Thermal photon and dilepton production and electric charge transport in a baryon rich strongly coupled QGP from holography

    E-print Network

    Finazzo, Stefano Ivo

    2015-01-01

We obtain the thermal photon and dilepton production rates in a strongly coupled quark-gluon plasma (QGP) at both zero and nonzero baryon chemical potential using a bottom-up Einstein-Maxwell-Dilaton (EMD) holographic model that is in good quantitative agreement with the thermodynamics of $(2+1)$-flavor lattice QCD around the crossover transition for baryon chemical potentials up to 400 MeV, which may be reached in the beam energy scan (BES) at RHIC. We find that increasing the temperature $T$ and the baryon chemical potential $\mu_B$ enhances the peak present in both spectra. We also obtain the electric charge susceptibility, the DC and AC electric conductivities and the electric charge diffusion as functions of $T$ and $\mu_B$. We find that electric diffusive transport is suppressed as one increases $\mu_B$. At zero baryon density, we compare our results for the DC electric conductivity and the electric charge diffusion with the latest lattice data available for these observables and find reasonable agreement.

  15. The association between heroin expenditure and dopamine transporter availability--a single-photon emission computed tomography study.

    PubMed

    Lin, Shih-Hsien; Chen, Kao Chin; Lee, Sheng-Yu; Chiu, Nan Tsing; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Chen, Chia-Chieh; Liao, Mei-Hsiu; Yang, Yen Kuang

    2015-03-30

    One of the consequences of heroin dependency is a huge expenditure on drugs. This underlying economic expense may be a grave burden for heroin users and may lead to criminal behavior, which is a huge cost to society. The neuropsychological mechanism related to heroin purchase remains unclear. Based on recent findings and the established dopamine hypothesis of addiction, we speculated that expenditure on heroin and central dopamine activity may be associated. A total of 21 heroin users were enrolled in this study. The annual expenditure on heroin was assessed, and the availability of the dopamine transporter (DAT) was assessed by single-photon emission computed tomography (SPECT) using [(99m)TC]TRODAT-1. Parametric and nonparametric correlation analyses indicated that annual expenditure on heroin was significantly and negatively correlated with the availability of striatal DAT. After adjustment for potential confounders, the predictive power of DAT availability was significant. Striatal dopamine function may be associated with opioid purchasing behavior among heroin users, and the cycle of spiraling dysfunction in the dopamine reward system could play a role in this association. PMID:25659472

  16. Monte Carlo Simulation in the Optimization of a Free-Air Ionization Chamber for Dosimetric Control in Medical Digital Radiography

    SciTech Connect

    Leyva, A.; Pinera, I.; Abreu, Y.; Cruz, C. M.; Montano, L. M.

    2008-08-11

    During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of the deposited photon energy on depth, and its integral over the whole active volume, were calculated. The results reveal that the geometry of the designed device can feasibly be optimized.

  17. The Monte Carlo code MCSHAPE: Main features and recent developments

    NASA Astrophysics Data System (ADS)

    Scot, Viviana; Fernandez, Jorge E.

    2015-06-01

    MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons with the special feature of describing the full evolution of the photon polarization state along the interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV, Compton and Rayleigh scattering and photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.

  18. Diffractive production of isolated photons at HERA

    E-print Network

    Peter Bussey; for the ZEUS Collaboration

    2015-07-14

    The ZEUS detector at HERA has been used to measure the photoproduction of isolated photons in diffractive events. Cross sections are evaluated in restricted ranges of the photon transverse energy and pseudorapidity, and as functions of the fractions of the incoming photon energy and of the colourless exchange ("Pomeron") energy that are imparted to a photon-jet final state. Comparison is made to predictions from the RAPGAP Monte Carlo simulation.

  19. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary. Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. 
Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has a LGPL ( http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References:http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, Sl. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
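
    The random walk this entry describes (free-path sampling, weight attenuation, Henyey-Greenstein scattering) can be sketched in a few dozen lines. This is a minimal single-threaded illustration under assumed parameters, not the Phoogle code; the published implementation follows Prahl et al. and uses Russian roulette rather than the simple weight cutoff shown here:

```python
import math
import random

def simulate_photons(n_photons, mu_a, mu_s, g, seed=1):
    """Random-walk photon transport in an infinite turbid medium.

    mu_a, mu_s: absorption/scattering coefficients [1/mm];
    g: Henyey-Greenstein anisotropy factor.
    Returns the mean path length travelled before a photon's
    weight falls below a cutoff (implicit absorption)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    total_path = 0.0
    for _ in range(n_photons):
        ux, uy, uz = 0.0, 0.0, 1.0           # launch along +z
        weight, path = 1.0, 0.0
        while weight > 1e-4:                 # cutoff instead of roulette
            step = -math.log(1.0 - rng.random()) / mu_t  # free path ~ Exp(mu_t)
            path += step
            weight *= albedo                 # implicit absorption
            # sample the Henyey-Greenstein deflection angle
            if g == 0.0:
                cos_t = 2.0 * rng.random() - 1.0
            else:
                tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
                cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
            sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            phi = 2.0 * math.pi * rng.random()
            # rotate the direction vector into the new frame
            if abs(uz) > 0.99999:
                sgn = 1.0 if uz > 0 else -1.0
                ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), sgn * cos_t
            else:
                den = math.sqrt(1.0 - uz * uz)
                ux, uy, uz = (
                    sin_t * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / den + ux * cos_t,
                    sin_t * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / den + uy * cos_t,
                    -sin_t * math.cos(phi) * den + uz * cos_t,
                )
        total_path += path
    return total_path / n_photons
```

    Because every photon history in this loop is independent, the algorithm maps naturally onto the SIMD architecture the entry describes: each GPU thread simply runs the inner loop for its own photon.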

  20. Dosimetric accuracy of a deterministic radiation transport based {sup 192}Ir brachytherapy treatment planning system. Part III. Comparison to Monte Carlo simulation in voxelized anatomical computational models

    SciTech Connect

    Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.

    2013-01-15

    Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.

  1. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    SciTech Connect

    Wan Chan Tseung, H; Ma, J; Beltran, C

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%–2mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%–2mm for treatment plan calculations is typically above 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20s for 1×10{sup 7} proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. 
Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam treatment plans. Hardware for such a system will cost under $5000. This work was funded in part by a grant from Varian Medical Systems, Inc.

  2. Characterization of [(123)I]IDAM as a novel single-photon emission tomography tracer for serotonin transporters.

    PubMed

    Kung, M P; Hou, C; Oya, S; Mu, M; Acton, P D; Kung, H F

    1999-08-01

    Development of selective serotonin transporter (SERT) tracers for single-photon emission tomography (SPET) is important for studying the underlying pharmacology and interaction of specific serotonin reuptake site inhibitors, commonly used antidepressants, at the SERT sites in the human brain. In search of a new tracer for imaging SERT, IDAM (5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio]benzyl alcohol) was developed. In vitro characterization of IDAM was carried out with binding studies in cell lines and rat tissue homogenates. In vivo binding of [(125)I]IDAM was evaluated in rats by comparing the uptakes in different brain regions through tissue dissections and ex vivo autoradiography. The in vitro binding study showed that IDAM displayed an excellent affinity for SERT sites (K(i)=0.097 nM, using membrane preparations of LLC-PK(1) cells expressing the specific transporter) and showed more than 1000-fold selectivity for SERT over the norepinephrine and dopamine transporters (expressed in the same LLC-PK(1) cells). Scatchard analysis of [(125)I]IDAM binding to frontal cortical membrane homogenates prepared from control or p-chloroamphetamine (PCA)-treated rats was evaluated. As expected, the control membranes showed a K(d) value of 0.25+/-0.05 nM and a B(max) value of 272+/-30 fmol/mg protein, while the PCA-lesioned membranes displayed a similar K(d), but with a reduced B(max) (20+/-7 fmol/mg protein). Biodistribution of [(125)I]IDAM (partition coefficient = 473; 1-octanol/buffer) in the rat brain showed a high initial uptake (2.44% dose at 2 min after i.v. injection), with specific binding peaking at 60 min postinjection (hypothalamus-cerebellum/cerebellum = 1.75). Ex vivo autoradiographs of rat brain sections (60 min after i.v. 
injection of [(125)I]IDAM) showed intense labeling in several regions (olfactory tubercle, lateral septal nucleus, hypothalamic and thalamic nuclei, globus pallidus, central gray, superior colliculus, substantia nigra, interpeduncular nucleus, dorsal and median raphes and locus coeruleus), which parallel known SERT density. This novel tracer has excellent characteristics for in vivo SPET imaging of SERT in the brain. PMID:10436197

  3. Monte Carlo applications at Hanford Engineering Development Laboratory

    SciTech Connect

    Carter, L.L.; Morford, R.J.; Wilcox, A.D.

    1980-03-01

    Twenty applications of neutron and photon transport with Monte Carlo have been described to give an overview of the current effort at HEDL. A satisfaction factor was defined which quantitatively assigns an overall return for each calculation relative to the investment in machine time and expenditure of manpower. Low satisfaction factors are frequently encountered in the calculations. Usually this is due to limitations in execution rates of present day computers, but sometimes a low satisfaction factor is due to computer code limitations, calendar time constraints, or inadequacy of the nuclear data base. Present day computer codes have taken some of the burden off of the user. Nevertheless, it is highly desirable for the engineer using the computer code to have an understanding of particle transport including some intuition for the problems being solved, to understand the construction of sources for the random walk, to understand the interpretation of tallies made by the code, and to have a basic understanding of elementary biasing techniques.

  4. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    PubMed Central

    Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.

    2015-01-01

    To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam. PMID:26170553
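
    The reported first and second HVLs (3.8 and 10.3 mm Al) illustrate beam hardening: the soft component of a polyenergetic spectrum is filtered out first, so the second HVL exceeds the first. A small numerical sketch with a made-up two-component spectrum (the weights and attenuation coefficients below are invented for illustration, not those of the 120 kVp beam in the study):

```python
import math

def transmission(t_mm, spectrum):
    """Transmitted fraction through t_mm of absorber for a
    polyenergetic beam given as [(weight, mu_per_mm), ...]."""
    return sum(w * math.exp(-mu * t_mm) for w, mu in spectrum)

def thickness_for(target, spectrum, lo=0.0, hi=100.0):
    """Bisection: absorber thickness at which transmission == target."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmission(mid, spectrum) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-component spectrum: (relative weight, mu in 1/mm).
spectrum = [(0.6, 0.30), (0.4, 0.05)]
first_hvl = thickness_for(0.5, spectrum)                 # halves the beam
second_hvl = thickness_for(0.25, spectrum) - first_hvl   # halves it again
# Beam hardening: the strongly attenuated component is removed first,
# so second_hvl comes out larger than first_hvl.
```

    For a monoenergetic beam the two HVLs would be equal (ln 2 / mu); the ratio of second to first HVL is a common index of how polyenergetic the beam is.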

  6. Comparison of pencil-beam, collapsed-cone and Monte-Carlo algorithms in radiotherapy treatment planning for 6-MV photons

    NASA Astrophysics Data System (ADS)

    Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho

    2015-07-01

    Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with each of the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated by using the CC and MC algorithms. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms, overestimating the dose relative to the CC and MC calculations. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry and better accuracy than the other algorithms overall.

  7. ARCHERRT – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    PubMed Central

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George

    2014-01-01

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. 
    Due to the specific architecture of the GPU, the modified Woodcock tracking algorithm performed worse than the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations cost about 60, 50, and 80 s for the three cases, respectively, with a 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7–1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9–13.4 s. For comparison, the same simulations on an Intel E5-2620 (12 hyperthreading) cost about 500–800 s. Conclusions: ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms. PMID:24989378
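
    Woodcock (delta) tracking, compared above, samples collision sites through heterogeneous geometry without ever computing voxel-boundary intersections: the particle flies with a majorant cross-section and rejects "virtual" collisions. A generic 1-D sketch of the idea (an illustration only, not the ARCHERRT kernel; unit-width voxels and the function names are assumptions):

```python
import math
import random

def woodcock_track(voxels, mu_of, rng, x0=0.0):
    """Sample a collision site along a 1-D ray through unit-width
    voxels using Woodcock (delta) tracking: fly with the majorant
    cross-section mu_max, then accept a real collision with
    probability mu(x)/mu_max, treating rejections as virtual
    collisions. Returns the collision coordinate, or None if the
    ray escapes the geometry."""
    mu_max = max(mu_of(v) for v in voxels)   # majorant over the geometry
    length = float(len(voxels))
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max
        if x >= length:
            return None                       # escaped the geometry
        if rng.random() < mu_of(voxels[int(x)]) / mu_max:
            return x                          # real collision
        # otherwise: virtual collision, keep flying
```

    The method trades boundary bookkeeping for extra rejection samples, which is why its efficiency on a GPU depends so strongly on how divergent the acceptance tests are across threads.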

  8. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes of 6 MV and in the lung region for most field sizes of both energies. 
The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for 18 MV 10 × 10 cm2 fields (over 26% passed) and in the bone region for 5 × 5 and 10 × 10 cm2 fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm2 field, which still did not pass. Conclusions: In heterogeneous media, AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
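
    The gamma index analyses cited in this and neighboring entries reduce, in each dimension, to a search for the closest point in the combined dose-difference/distance-to-agreement space. A brute-force 1-D global-gamma sketch (an illustration of the metric only; production implementations interpolate between samples and restrict the search radius):

```python
import math

def gamma_index(dose_eval, dose_ref, spacing_mm, dd=0.02, dta_mm=2.0):
    """Global 1-D gamma index (default 2%/2 mm) of an evaluated dose
    profile against a reference sampled on the same grid. dd is
    relative to the reference maximum (global normalization)."""
    d_max = max(dose_ref)
    gammas = []
    for i, dr in enumerate(dose_ref):
        best = float("inf")
        for j, de in enumerate(dose_eval):
            dist = (j - i) * spacing_mm
            ddiff = de - dr
            best = min(best, (dist / dta_mm) ** 2
                             + (ddiff / (dd * d_max)) ** 2)
        gammas.append(math.sqrt(best))
    return gammas

def pass_rate(gammas, threshold=1.0):
    """Fraction of points with gamma <= threshold."""
    return sum(g <= threshold for g in gammas) / len(gammas)
```

    A point passes when some nearby evaluated point agrees within the combined tolerance (gamma <= 1); the pass rates quoted in these abstracts are exactly this fraction over the dose grid.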

  9. TH-E-BRE-01: A 3D Solver of Linear Boltzmann Transport Equation Based On a New Angular Discretization Method with Positivity for Photon Dose Calculation Benchmarked with Geant4

    SciTech Connect

    Hong, X; Gao, H

    2014-06-15

    Purpose: The Linear Boltzmann Transport Equation (LBTE) solved through the statistical Monte Carlo (MC) method provides accurate dose calculation in radiotherapy. This work investigates an alternative way of accurately solving the LBTE using a deterministic numerical method, due to its possible advantage over MC in computational speed. Methods: Instead of using traditional spherical harmonics to approximate the angular scattering kernel, our deterministic numerical method directly computes angular scattering weights, based on a new angular discretization method that utilizes a linear finite element method on a local triangulation of the unit angular sphere. As a result, our angular discretization method has the unique advantage of positivity, i.e., it maintains all scattering weights nonnegative at all times, which is physically correct. Moreover, our method is local in angular space, and therefore handles anisotropic scattering well, such as forward-peaked scattering. To be compatible with image-guided radiotherapy, the spatial variables are discretized on a structured grid with the standard diamond scheme. After discretization, an improved source-iteration method is utilized to solve the linear system without saving it to memory. The accuracy of our 3D solver is validated using analytic solutions and benchmarked with Geant4, a popular MC solver. Results: The differences between Geant4 solutions and our solutions were less than 1.5% for various testing cases that mimic practical cases. More details are available in the supporting document. Conclusion: We have developed a 3D LBTE solver based on a new angular discretization method that guarantees the positivity of scattering weights for physical correctness, and it has been benchmarked with Geant4 for photon dose calculation.
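
    Source iteration, as used by the solver above, lags the scattering source and sweeps the angular directions until the scalar flux converges. A deliberately tiny 1-D slab analogue with S2 quadrature and diamond differencing (a textbook sketch, not the authors' 3-D finite-element-in-angle scheme; all parameters are assumptions):

```python
import math

def source_iteration(nx, width, sig_t, sig_s, q, tol=1e-8, max_iters=1000):
    """1-D slab transport with S2 quadrature (mu = +/-1/sqrt(3),
    weights 1), diamond-difference sweeps, vacuum boundaries, and a
    uniform isotropic source q. The scattering source is lagged and
    the sweeps repeated until the scalar flux converges."""
    dx = width / nx
    directions = [(1.0 / math.sqrt(3.0), 1.0), (-1.0 / math.sqrt(3.0), 1.0)]
    phi = [0.0] * nx
    for _ in range(max_iters):
        phi_new = [0.0] * nx
        for mu, w in directions:
            psi_in = 0.0                     # vacuum boundary: no inflow
            a = 2.0 * abs(mu) / dx
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                src = 0.5 * (sig_s * phi[i] + q)          # isotropic emission
                psi_c = (src + a * psi_in) / (sig_t + a)  # diamond difference
                psi_in = 2.0 * psi_c - psi_in             # outgoing edge flux
                phi_new[i] += w * psi_c
        err = max(abs(p1 - p0) for p1, p0 in zip(phi_new, phi))
        phi = phi_new
        if err < tol:
            break
    return phi
```

    Deep inside a thick slab the converged flux approaches the infinite-medium balance q/(sig_t - sig_s); the iteration converges at a rate set by the scattering ratio sig_s/sig_t, which is why highly scattering problems need acceleration schemes.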

  10. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  11. Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code

    SciTech Connect

    Trumbull, T. H.

    2013-07-01

    With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition - and the subsequent heating - has increased. However, the availability and quality of data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies while also allowing calculations with different levels of fidelity to be performed. Detailed discussions on the usage of the various components of the energy release data are provided to demonstrate novel methods in borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor is a result of both neutron and photon interactions with materials, a discussion on the photon energy deposition data processing is also provided. (authors)

  12. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  13. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output.

    PubMed

    Hunter, William C J; Barrett, Harrison H; Lewellen, Thomas K; Miyaoka, Robert S; Muzi, John P; Li, Xiaoli; McDougald, Wendy; Macdonald, Lawrence R

    2010-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
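
    The signal chain that SCOUT models can be caricatured in one dimension: spread scintillation light over a PMT array, draw Poisson counts per tube, and localize the event with Anger (centroid) logic. This is a toy illustration of the stochastic readout idea, not the SCOUT code; every geometry and yield number below is invented:

```python
import math
import random

def anger_estimate(event_x, pmt_xs, light_yield, sigma, rng):
    """Toy 1-D scintillation-camera readout: share `light_yield`
    optical photons among PMTs via a Gaussian light-spread function,
    draw Poisson counts per tube, and localize the event with the
    Anger (centroid) logic. Returns None if no photons are detected."""
    def poisson(lam):
        # Poisson sampling via summed exponential inter-arrival times
        n, t = 0, 0.0
        while True:
            t += -math.log(1.0 - rng.random()) / lam
            if t > 1.0:
                return n
            n += 1
    means = [light_yield * math.exp(-0.5 * ((event_x - x) / sigma) ** 2)
             for x in pmt_xs]
    counts = [poisson(m) if m > 1e-9 else 0 for m in means]
    total = sum(counts)
    if total == 0:
        return None
    return sum(c * x for c, x in zip(counts, pmt_xs)) / total
```

    Repeating the estimate over many events and examining the spread of the returned positions is exactly the kind of output-signal statistic a tool like SCOUT is built to predict.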

  14. Nuclear spectroscopy for in situ soil elemental analysis: Monte Carlo simulations

    SciTech Connect

    Wielopolski L.; Doron, O.

    2012-07-01

    We developed a model to simulate a novel inelastic neutron scattering (INS) system for in situ non-destructive analysis of soil using standard Monte Carlo Neutron Photon (MCNP5a) transport code. The volumes from which 90%, 95%, and 99% of the total signal are detected were estimated to be 0.23 m{sup 3}, 0.37 m{sup 3}, and 0.79 m{sup 3}, respectively. Similarly, we assessed the instrument's sampling footprint and depths. In addition we discuss the impact of the carbon's depth distribution on sampled depth.

  15. COUPLED MULTI-GROUP NEUTRON PHOTON TRANSPORT FOR THE SIMULATION OF HIGH-RESOLUTION GAMMA-RAY SPECTROSCOPY APPLICATIONS

    SciTech Connect

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  16. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
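The angular-biasing idea, sampling emission directions preferentially toward the capsule while keeping the estimate unbiased via statistical weights, can be sketched as follows (illustrative Python; the linear biased PDF and its parameter are assumptions, not the scheme used in the ICF codes):

```python
import random

def sample_biased_mu(a, rng):
    """Sample a direction cosine mu from the biased PDF
    p_b(mu) = (1 + a*mu)/2 on [-1, 1] (a in [0, 1) pushes samples
    toward mu = +1, i.e. toward the capsule), by rejection against a
    uniform proposal. Returns (mu, weight), where the statistical
    weight w = p_true/p_b restores the unbiased isotropic expectation."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() < (1.0 + a * mu) / (1.0 + a):
            return mu, 1.0 / (1.0 + a * mu)  # (1/2) / ((1 + a*mu)/2)

def mean_weight(a, n, seed=1):
    """Average weight over n samples; converges to 1 for any a,
    which is the unbiasedness check for the biasing scheme."""
    rng = random.Random(seed)
    return sum(sample_biased_mu(a, rng)[1] for _ in range(n)) / n
```

The efficiency gain comes from more samples landing in the important direction, while the weights guarantee the tally expectation is unchanged.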

  17. CTRANS: A Monte Carlo program for radiative transfer in plane parallel atmospheres with imbedded finite clouds: Development, testing and user's guide

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.

  18. Decoupled fluorescence Monte Carlo model for direct computation of fluorescence in turbid media.

    PubMed

    Luo, Zhaoyang; Deng, Yong; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Luo, Qingming

    2015-02-01

    We present a decoupled fluorescence Monte Carlo (dfMC) model for the direct computation of the fluorescence in turbid media. By decoupling the excitation-to-emission conversion and transport process of the fluorescence from the path probability density function and associating the corresponding parameters involving the fluorescence process with the weight function, the dfMC model employs the path histories of the excitation photons and the corresponding new weight function to directly calculate the fluorescence. We verify the model’s accuracy using phantom experiments and compare it with that of the perturbation fluorescence Monte Carlo model. The results indicate that the model is accurate for the direct fluorescence calculation and, thus, has great potential for application in fluorescence-based in vivo tomography. PMID:25649626
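The path-reuse idea, reweighting recorded excitation-photon histories instead of re-simulating transport at the emission wavelength, can be sketched generically (Python; this is a standard perturbation-style weight ratio, not the paper's exact dfMC weight function, and all coefficient values are hypothetical):

```python
import math

def fluorescence_weight(path_len, n_scat, mu_ax, mu_sx, mu_am, mu_sm, qy):
    """Weight that converts a recorded excitation-photon path (total
    length path_len, n_scat scattering events) into a fluorescence
    estimate: the ratio of the emission-wavelength path density
    (mu_s_em**n * exp(-mu_t_em * L)) to the excitation-wavelength one,
    multiplied by the quantum yield qy. A generic perturbation-style
    reweighting, not the dfMC paper's exact weight function."""
    mu_tx = mu_ax + mu_sx   # excitation total attenuation
    mu_tm = mu_am + mu_sm   # emission total attenuation
    w = qy * math.exp(-(mu_tm - mu_tx) * path_len)
    if n_scat:
        w *= (mu_sm / mu_sx) ** n_scat
    return w
```

When the optical coefficients at the two wavelengths coincide, the weight reduces to the quantum yield alone, which is a convenient sanity check.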

  19. Fricke Gel Dosimeter Tissue-Equivalence a Monte Carlo Study

    NASA Astrophysics Data System (ADS)

    Valente, M.; Bartesaghi, G.; Gambarini, G.; Brusa, D.; Castellano, G.; Carrara, M.

    2008-06-01

    Gel dosimetry has proved to be a valuable technique for absorbed dose distribution measurements in radiotherapy. FriXy-gel dosimeters consist of Fricke (ferrous sulphate) solution infused with xylenol orange. The solution is incorporated to a gel matrix in order to fix it to a solid structure allowing good spatial resolution and is imaged with a transportable optical system, measuring visible light transmittance before and after irradiation. This paper presents an evaluation of total photon mass attenuation coefficients at energies in the range of 50 keV-10MeV for the radiochromic FriXy gel dosimeter sensitive material. Mass attenuation coefficient estimations have been performed by means of Monte Carlo (PENELOPE) simulations. These calculations have been carried out for the FriXy gel sensitive material as well as for soft tissue (ICRU) and pure liquid water; a comparison of the obtained data shows good agreement between the different materials.
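The quantity being compared, the photon attenuation coefficient, is exactly what a Monte Carlo code samples free paths from; a minimal sketch of estimating slab transmission from exponential free paths (plain Python, illustrative values only):

```python
import math
import random

def transmitted_fraction(mu, thickness, n, seed=0):
    """Monte Carlo estimate of the uncollided transmission through a
    slab: sample exponential free paths s = -ln(1 - U)/mu and count
    photons whose first flight exceeds the slab thickness. The
    estimate converges to exp(-mu * thickness), so the attenuation
    coefficient can be recovered as mu = -ln(T)/thickness."""
    rng = random.Random(seed)
    passed = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / mu > thickness
    )
    return passed / n
```

Dividing the recovered linear coefficient by the material density gives the mass attenuation coefficient tabulated in studies like the one above.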

  20. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. 
The significant uncertainty contributions are identified as the energy of the radiation source and the underlying photon cross sections as well as the I-value of media involved in the simulation. The combined standard uncertainty of the Monte Carlo calculation yields 0.78% as a conservative estimation. The result of the calculation is close to the experimental result and, with the combined standard uncertainty below 1%, the accuracy of EGSnrc is confirmed. The setup and methodology of this study can be employed to benchmark other Monte Carlo codes for the calculation of absorbed dose in radiotherapy. PMID:26389610
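The sensitivity-coefficient step of the GUM methodology can be sketched as follows (Python; the finite-difference estimation and the toy model in the usage example are illustrative, not the study's EGSnrc model):

```python
import math

def combined_uncertainty(model, x, u, rel_step=1e-6):
    """GUM-style first-order propagation for uncorrelated inputs:
    estimate the sensitivity coefficients c_i = dD/dx_i by central
    finite differences, then combine the standard uncertainties as
    u_c = sqrt(sum_i (c_i * u_i)**2).
    model: callable taking a list of input quantities;
    x: best estimates; u: standard uncertainties (same length)."""
    uc2 = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        h = abs(xi) * rel_step or rel_step
        xp, xm = list(x), list(x)
        xp[i] = xi + h
        xm[i] = xi - h
        ci = (model(xp) - model(xm)) / (2.0 * h)
        uc2 += (ci * ui) ** 2
    return math.sqrt(uc2)
```

For a toy product model D = a*b with best estimates [2, 3] and standard uncertainties [0.02, 0.03], the sensitivity coefficients are 3 and 2 and the combined standard uncertainty is sqrt((3*0.02)^2 + (2*0.03)^2) ≈ 0.085.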

  1. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    NASA Astrophysics Data System (ADS)

    Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. 
The significant uncertainty contributions are identified as the energy of the radiation source and the underlying photon cross sections as well as the I-value of media involved in the simulation. The combined standard uncertainty of the Monte Carlo calculation yields 0.78% as a conservative estimation. The result of the calculation is close to the experimental result and, with the combined standard uncertainty below 1%, the accuracy of EGSnrc is confirmed. The setup and methodology of this study can be employed to benchmark other Monte Carlo codes for the calculation of absorbed dose in radiotherapy.

  2. Verification of Monte Carlo transport codes against measured small angle p-, d-, and t-emission in carbon fragmentation at 600 MeV/nucleon

    E-print Network

    B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik

    2015-02-05

    Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from C12 fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.

  3. Verification of Monte Carlo transport codes against measured small angle p-, d-, and t-emission in carbon fragmentation at 600 MeV/nucleon

    SciTech Connect

    Abramov, B. M.; Alekseev, P. N.; Borodin, Yu. A.; Bulychjov, S. A.; Dukhovskoy, I. A.; Krutenkova, A. P.; Martemianov, M. A.; Matsyuk, M. A.; Turdakina, E. N.; Khanov, A. I.; Mashnik, Stepan Georgievich

    2015-02-03

    Momentum spectra of hydrogen isotopes have been measured at 3.5° from 12C fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.

  4. Monte Carlo random walk simulation of electron transport in confined porous TiO2 as a promising candidate for photo-electrode of nano-crystalline solar cells

    NASA Astrophysics Data System (ADS)

    Javadi, M.; Abdi, Y.

    2015-08-01

    Monte Carlo continuous time random walk simulation is used to study the effects of confinement on electron transport in porous TiO2. In this work, we have introduced a columnar structure instead of the thick layer of porous TiO2 used as the anode in conventional dye solar cells. Our simulation results show that the electron diffusion coefficient in the proposed columnar structure is significantly higher than the diffusion coefficient in the conventional structure. It is shown that electron diffusion in the columnar structure depends on both the cross-sectional area of the columns and the porosity of the structure. Also, we demonstrate that such enhanced electron diffusion can be realized in columnar photo-electrodes with a cross-sectional area of ~1 μm² and porosity of 55%, by a simple and low-cost fabrication process. Our results open up a promising approach to achieve solar cells with higher efficiencies by engineering the photo-electrode structure.
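The diffusion-coefficient extraction that underlies such simulations can be sketched with a plain lattice random walk (Python; a simple walk without the trap-limited waiting-time distribution of a full continuous-time random walk, so the numbers are illustrative only):

```python
import random

def diffusion_coefficient(n_walkers, n_steps, a=1.0, tau=1.0, seed=0):
    """Estimate D from the mean-squared displacement of a simple 2-D
    lattice random walk with step length a and mean hop time tau,
    via D = <r^2> / (4 t). For this plain walk the exact answer is
    a**2 / (4 * tau)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x += dx
            y += dy
        msd += x * x + y * y
    msd = (msd / n_walkers) * a * a
    return msd / (4.0 * n_steps * tau)
```

In a confinement study, the geometry (column cross-section, porosity) would enter as boundary and blocking rules on the allowed hops, and D would be read off from the same mean-squared-displacement slope.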

  5. Dirac tensor with heavy photon

    SciTech Connect

    Bytev, V. V.; Kuraev, E. A.; Scherbakova, E. S.

    2013-03-15

    For the large-angle hard-photon emission by initial leptons in the process of high-energy annihilation of e{sup +}e{sup -} to hadrons, the Dirac tensor is obtained by taking the lowest-order radiative corrections into account. The case of large-angle emission of two hard photons by initial leptons is considered. In the final result, the kinematic case of collinear emission of hard photons and soft virtual and real photons is included; it can be used for the construction of Monte-Carlo generators.

  6. Use of MOSFET dosimeters to validate Monte Carlo radiation treatment calculation in an anthropomorphic phantom

    NASA Astrophysics Data System (ADS)

    Juste, Belén; Miró, R.; Abella, V.; Santos, A.; Verdú, Gumersindo

    2015-11-01

    Radiation therapy treatment planning based on Monte Carlo simulation provides very accurate dose calculations compared to deterministic systems. Nowadays, Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy to verify the dose received by patients. In the present work, we have used MCNP6 (Monte Carlo N-Particle transport code) to simulate the irradiation of an anthropomorphic phantom (RANDO) with a medical linear accelerator. The detailed model of the Elekta Precise multileaf collimator using a 6 MeV photon beam was designed and validated by means of different beam sizes and shapes in previous works. To include the RANDO phantom geometry in the simulation, a set of computed tomography images of the phantom was obtained and formatted. The slices are input into the PLUNC software, which performs the segmentation by defining anatomical structures, and a Matlab algorithm writes the phantom information in MCNP6 input deck format. The simulation was verified, and therefore the phantom model and irradiation were validated, through the comparison of High-Sensitivity MOSFET dosimeter (Best Medical Canada) measurements at different points inside the phantom with simulation results. On-line wireless MOSFET dosimeters provide dose estimation in an extremely thin sensitive volume, so a meticulous and accurate validation has been performed. The comparison shows good agreement between the MOSFET measurements and the Monte Carlo calculations, confirming the validity of the developed procedure to include patient CT images in simulations and supporting the use of Monte Carlo simulation for accurate treatment planning.

  7. Parallel Monte Carlo reactor neutronics

    SciTech Connect

    Blomquist, R.N.; Brown, F.B.

    1994-03-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

  8. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    SciTech Connect

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
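Shift-register-sequence generators like the one mentioned are a family of xor/shift recurrences; a minimal member of the family is Marsaglia's 32-bit xorshift (Python sketch; the paper's four-tap generator uses different taps and a larger state, so this is illustrative only):

```python
class XorShift32:
    """A minimal shift-register-sequence generator: Marsaglia's
    32-bit xorshift with the 13/17/5 shift triple. Shown only to
    illustrate the family; the code described above uses a four-tap
    generator with different taps and state size."""
    def __init__(self, seed=2463534242):
        self.state = seed & 0xFFFFFFFF  # nonzero 32-bit seed required

    def next_u32(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        self.state = x
        return x

    def random(self):
        """Uniform float in [0, 1)."""
        return self.next_u32() / 4294967296.0
```

The property Monte Carlo codes rely on is visible here: the same seed reproduces exactly the same stream, which makes particle histories replayable for debugging and verification.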

  9. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.

  10. Photon-photon collisions

    SciTech Connect

    Burke, D.L.

    1982-10-01

    Studies of photon-photon collisions are reviewed with particular emphasis on new results reported to this conference. These include results on light meson spectroscopy and deep inelastic eγ scattering. Considerable work has now been accumulated on resonance production by γγ collisions. Preliminary high-statistics studies of the photon structure function F₂γ(x, Q²) are given and comments are made on the problems that remain to be solved.

  11. Extraordinary Photon Transport by Near-Field Coupling of a Nanostructured Metamaterial with a Graphene-Covered Plate

    E-print Network

    Chang, Jui-Yung; Wang, Liping

    2015-01-01

    Coupled surface plasmon/phonon polaritons and hyperbolic modes are known to enhance radiative transport across nanometer vacuum gaps but usually require identical materials. It becomes crucial to achieve strong near-field energy transfer between dissimilar materials for applications like near-field thermophotovoltaic and thermal rectification. In this work, we theoretically demonstrate extraordinary near-field radiative transport between a nanostructured metamaterial emitter and a graphene-covered planar receiver. Strong near-field coupling with two orders of magnitude enhancement in the spectral heat flux is achieved at the gap distance of 20 nm. By carefully selecting the graphene chemical potential and doping levels of silicon nanohole emitter and silicon plate receiver, the total near-field radiative heat flux can reach about 500 times higher than the far-field blackbody limit between 400 K and 300 K. The physical mechanisms are elucidated by the near-field surface plasmon coupling with fluctuational elec...

  12. A Monte-Carlo maplet for the study of the optical properties of biological tissues

    NASA Astrophysics Data System (ADS)

    Yip, Man Ho; Carvalho, M. J.

    2007-12-01

    Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summaryProgram title:MonteCarloMaplet Catalogue identifier:ADZU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:3251 No. of bytes in distributed program, including test data, etc.:296 465 Distribution format: tar.gz Programming language:Maple 10 Computer: Acer Aspire 5610 (any running Maple 10) Operating system: Windows XP professional (any running Maple 10) Classification: 3.1, 5 Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; The Maple library routine for random number generation is used [Maple 10 User Manual c Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. 
If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required and once effected the worksheet runs without problem. However some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.

  13. Treating electron transport in MCNP{sup trademark}

    SciTech Connect

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron slowing down in aluminum from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10{sup 5} individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport infeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
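The stepping scheme described above can be caricatured in a few lines (Python; the Gaussian deflection width and fixed fractional energy loss are crude stand-ins for the Goudsmit-Saunderson and Landau/Blunck-Leisegang samplings, so this shows only the structure of a condensed-history loop, not MCNP's physics):

```python
import math
import random

def condensed_history_track(e0, rng, step_frac=0.05, e_cut=0.01):
    """Toy condensed-history electron track: per step, lose a fixed
    fraction of the current energy (a deterministic stand-in for the
    sampled energy-loss straggling) and add a Gaussian multiple-
    scattering deflection whose width grows as the energy drops
    (a small-angle stand-in for Goudsmit-Saunderson sampling).
    Returns a list of (energy, cumulative_deflection) per step."""
    energy, theta = e0, 0.0
    track = []
    while energy > e_cut:
        de = step_frac * energy                 # condensed loss over many collisions
        sigma = 0.02 * math.sqrt(de) / energy   # hypothetical width model
        theta += rng.gauss(0.0, sigma)
        energy -= de
        track.append((energy, theta))
    return track
```

Each loop iteration subsumes thousands of individual collisions into one sampled step, which is precisely the trade that makes electron Monte Carlo tractable.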

  14. National Photonics Skills Standard for Technicians.

    ERIC Educational Resources Information Center

    Center for Occupational Research and Development, Inc., Waco, TX.

    This document defines "photonics" as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's communication…

  15. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    SciTech Connect

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S({alpha},{beta}) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k{sub eff}), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate the characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
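The kind of eigenvalue calculation CASP performs can be illustrated with a toy generation-based estimator (Python; an infinite one-group medium with made-up constants, nothing like the actual WSU core model):

```python
import random

def k_eff_generations(n_start, n_gens, p_fission, nu, seed=0):
    """Toy generation-based k_eff estimate for an infinite one-group
    medium: every neutron is eventually absorbed; with probability
    p_fission = Sigma_f / Sigma_a the absorption is a fission emitting
    on average nu neutrons (integer sampled around nu). The mean
    per-generation population ratio estimates k = nu * Sigma_f / Sigma_a."""
    rng = random.Random(seed)
    pop, ratios = n_start, []
    for _ in range(n_gens):
        births = 0
        for _ in range(pop):
            if rng.random() < p_fission:
                # integer number of fission neutrons with mean nu
                n = int(nu) + (1 if rng.random() < nu - int(nu) else 0)
                births += n
        ratios.append(births / pop)
        pop = births  # next generation's starting population
        if pop == 0:
            break
    return sum(ratios) / len(ratios)
```

With p_fission = 0.4 and nu = 2.5 the analytic eigenvalue is exactly 1, so the estimator should hover around criticality; a real code like MCNP4A additionally tracks fission sites in space and converges the source distribution across generations.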

  16. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for helical tomotherapy is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this dissertation, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
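The combination step, folding the electron spectrum with per-electron damage yields, reduces to a fluence-weighted average; a minimal sketch (Python, with illustrative numbers rather than PENELOPE or MCDS output):

```python
def fold_damage_yield(spectrum, yields):
    """Fluence-weighted average damage yield: combine an electron
    spectrum phi(E_i) (arbitrary units, on the same energy grid as
    the yields) with per-electron damage yields Y(E_i), i.e.
    Y_avg = sum_i phi_i * Y_i / sum_i phi_i."""
    num = sum(p * y for p, y in zip(spectrum, yields))
    den = sum(spectrum)
    return num / den
```

In the methodology above, `spectrum` would be the initial electron spectrum computed by PENELOPE for a given photon source and `yields` the monoenergetic strand-break yields from MCDS.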

  17. Application of Dynamic Monte Carlo Technique in Proton Beam Radiotherapy using Geant4 Simulation Toolkit 

    E-print Network

    Guan, Fada 1982-

    2012-04-27

    Monte Carlo method has been successfully applied in simulating the particles transport problems. Most of the Monte Carlo simulation tools are static and they can only be used to perform the static simulations for the ...

  18. Photon-photon collisions

    SciTech Connect

    Brodsky, S.J.

    1988-07-01

    Highlights of the VIIIth International Workshop on Photon-Photon Collisions are reviewed. New experimental and theoretical results were reported in virtually every area of γγ physics, particularly in exotic resonance production and tests of quantum chromodynamics, where asymptotic freedom and factorization theorems provide predictions for both inclusive and exclusive γγ reactions at high momentum transfer. 73 refs., 12 figs.

  19. Some new results on electron transport in the atmosphere. [Monte Carlo calculation of penetration, diffusion, and slowing down of electron beams in air

    NASA Technical Reports Server (NTRS)

    Berger, M. J.; Seltzer, S. M.; Maeda, K.

    1972-01-01

    The penetration, diffusion and slowing down of electrons in a semi-infinite air medium have been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of the altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of the atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.

  20. Pulse and energy pulse height tally comparison in breast dosimetry with Monte Carlo radiation transport codes: MCNP5 and PENEASY(2005).

    PubMed

    Ramos, M; Ferrer, S; Verdu, G

    2005-01-01

    The authors present a review of tallying processes with non-Boltzmann tallies under Monte Carlo simulations. A comparison between different pulse and energy pulse height tallies has been made with the MCNP5 code and PENEASY, a user-friendly version of the PENELOPE code. Several simulations have been performed to estimate the pulse and energy deposited spectra in a polymethyl methacrylate (PMMA) phantom used during quality control testing in digital mammography. In the case of MCNP5, the arbitrary energy loss that is activated by default for particles just crossing the detector has been removed to compare the efficiency of the tally. PENEASY works similarly, counting all scores whether or not they have deposited energy in the phantom. A correction has been made to the code to remove this scoring. As derived from the results, the deposited energy has been estimated as 3.73369e-3 MeV/particle for MCNP5 and 3.25468e-3 MeV/particle for PENEASY. Further studies are necessary to obtain more accurate results modeling the compression plate and the imaging system. Pulse and energy pulse height spectra are tallies still under development, and every effort must be made to understand the tallying process under different applications. PMID:17282861
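The defining feature of a non-Boltzmann pulse-height tally, as discussed above, is that energy deposition is accumulated over an entire particle history and binned once at the end, rather than scored collision by collision. A toy sketch of that scoring pattern (the "physics" here is random placeholder numbers, not MCNP5 or PENEASY behavior):

```python
import random

random.seed(1)

# Toy pulse-height tally: energy deposited is accumulated per *history*
# (source particle plus secondaries) and binned once when the history
# ends -- the defining feature of a non-Boltzmann tally.
# Interaction counts and energies are purely illustrative.

bin_edges = [0.0, 0.05, 0.10, 0.15, 0.25]  # MeV
counts = [0] * (len(bin_edges) - 1)

def run_history():
    """Sum deposits from a random number of interactions in one history."""
    deposited = 0.0
    for _ in range(random.randint(1, 4)):        # interactions per history
        deposited += random.uniform(0.0, 0.05)   # MeV per interaction
    return deposited

n_histories = 10000
for _ in range(n_histories):
    e = run_history()
    for i in range(len(counts)):
        if bin_edges[i] <= e < bin_edges[i + 1]:
            counts[i] += 1
            break

print(counts, sum(counts))
```

A conventional track-length or collision tally would instead score each of the per-interaction deposits independently, which is why the two tally families can disagree when compared naively.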

  1. Photon transport of the superradiant TeraFERMI THz beamline at the FERMI free-electron laser.

    PubMed

    Svetina, Cristian; Mahne, Nicola; Raimondi, Lorenzo; Perucchi, Andrea; Di Pietro, Paola; Lupi, Stefano; Schmidt, Bernhard; Zangrando, Marco

    2016-01-01

    TeraFERMI is the new terahertz (THz) beamline for pump-probe studies on the femtosecond time-scale, under construction at the FERMI free-electron laser (FEL) facility in Trieste, Italy. The beamline will take advantage of the coherent radiation emitted by the spent electrons from the FEL undulators, before being dumped. This will result in short, coherent, high-power THz pulses to be used as a pump beam, in order to modulate structural properties of matter, thereby inducing phase transitions. The TeraFERMI beamline collects THz radiation in the undulator hall and guides it along a beam pipe which is approximately 30 m long, extending across the safety hutch and two shielding walls. Here the optical design, which will allow the efficient transport of the emitted THz radiation in the experimental hall, is presented. PMID:26698051

  2. Accuracy of retinal oximetry: a Monte Carlo investigation

    NASA Astrophysics Data System (ADS)

    Liu, Wenzhong; Jiao, Shuliang; Zhang, Hao F.

    2013-06-01

    Retinal hemoglobin oxygen saturation (sO2) level is believed to be associated with the pathophysiology of several leading blinding diseases. Methods to properly measure retinal sO2 have been investigated for decades; however, the accuracy of retinal oximetry is still considered limited. A Monte Carlo simulation of photon transport in the retina is discussed to examine how the accuracy of retinal oximetry is affected by local parameters. Fundus photography was simulated in a multilayer retinal model, in which a single vessel segment with 0.7 sO2 was embedded, at six optical wavelengths. Then, 200 million photons were traced in each simulation to ensure statistically stable results. The optical reflectance and energy deposit were recorded to measure sO2 using both the reflection method (existing retinal oximetry) and a new absorption method, photoacoustic ophthalmoscopy (PAOM). When the vessel diameter and the melanin concentration in the retinal pigment epithelium were varied, the relative error of sO2 measurement in the reflection method increased with increasing vessel diameter and melanin concentration; in comparison, the sO2 measurement was insensitive to these two parameters in PAOM. The results suggest that PAOM can potentially be a more accurate tool for quantifying retinal sO2.
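The sO2 retrieval underlying both methods above rests on multi-wavelength absorbance: oxy- and deoxyhemoglobin have different extinction spectra, so two wavelengths suffice to solve for their relative concentrations. A minimal two-wavelength sketch follows; the extinction coefficients and wavelengths are illustrative placeholders, not literature values.

```python
# Two-wavelength estimate of hemoglobin oxygen saturation (sO2) from
# optical absorbance. Extinction coefficients below are illustrative
# placeholders, not measured HbO2/Hb values.

# molar extinction [cm^-1/M] at two wavelengths [nm]
eps = {
    "HbO2": {570: 40000.0, 600: 3200.0},
    "Hb":   {570: 38000.0, 600: 14600.0},
}

def so2_from_absorbance(A570, A600):
    """Solve A(l) = eps_HbO2(l)*C_HbO2 + eps_Hb(l)*C_Hb for the two
    concentrations, then return sO2 = C_HbO2 / (C_HbO2 + C_Hb).
    The optical path length cancels in the ratio."""
    a, b = eps["HbO2"][570], eps["Hb"][570]
    c, d = eps["HbO2"][600], eps["Hb"][600]
    det = a * d - b * c
    c_hbo2 = (A570 * d - A600 * b) / det
    c_hb = (A600 * a - A570 * c) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Forward-simulate absorbances for a vessel with sO2 = 0.7, then invert.
true_so2 = 0.7
C = 1e-4  # total heme concentration [M], arbitrary
A570 = (eps["HbO2"][570] * true_so2 + eps["Hb"][570] * (1 - true_so2)) * C
A600 = (eps["HbO2"][600] * true_so2 + eps["Hb"][600] * (1 - true_so2)) * C
print(round(so2_from_absorbance(A570, A600), 6))  # -> 0.7
```

In real retinal oximetry the inversion is biased by exactly the effects the study varies (vessel diameter, melanin), which perturb the effective absorbance away from this idealized Beer-Lambert picture.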

  3. Spectral backward Monte Carlo method for surface infrared image simulation

    NASA Astrophysics Data System (ADS)

    Sun, Haifeng; Xia, Xinlin; Sun, Chuang; Chen, Xue

    2014-11-01

    The surface infrared radiation is an important contributor to the infrared image of an airplane. The Monte Carlo method for infrared image calculation is well suited to the complex geometry of targets such as airplanes, and the backward Monte Carlo method is preferable to the forward method given the usually long distance between target and detector. Analogous to non-gray absorbing media, a random number relation is developed for the radiation of a spectral surface. In the backward Monte Carlo method, a single random number that determines the wavelength (or wavenumber) may yield different wavenumbers for the target surface elements along the track of a photon bundle. By manipulating the densities of photon bundles in arbitrarily small intervals near each wavenumber, the wavelengths corresponding to one random number on all target surface elements along the track of the photon bundle are kept the same, preserving the energy balance of the bundle. This model, together with the energy partition model, is incorporated into the backward Monte Carlo method to form a spectral backward Monte Carlo method. The developed method is first used to calculate the infrared images of a simple configuration with two gray spectral bands, and its efficiency is validated by comparing its results with those of the non-spectral backward Monte Carlo method. The validated spectral backward Monte Carlo method is then used to simulate the infrared image of the SDM airplane model with a spectral surface, and the distribution of received infrared radiation flux over the detector pixels is analyzed.
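The "random number relation" mentioned above maps one uniform random number to a spectral location by inverting the cumulative emissive-power distribution, so the same number reproducibly selects the same wavenumber. A minimal sketch with an assumed two-band piecewise-constant spectrum (all grid values illustrative):

```python
import bisect

# Sketch of a random number relation for a spectral surface: a random
# number xi in [0,1) is mapped to a wavenumber by inverting the
# cumulative emissive-power distribution. The spectrum is illustrative.

# wavenumber grid [cm^-1] and (unnormalized) spectral emissive power per band
wavenumbers = [1000, 1500, 2000, 2500, 3000]
power = [0.5, 2.0, 2.0, 0.5]

cdf = [0.0]
for p, lo, hi in zip(power, wavenumbers, wavenumbers[1:]):
    cdf.append(cdf[-1] + p * (hi - lo))
total = cdf[-1]
cdf = [c / total for c in cdf]   # -> [0.0, 0.1, 0.5, 0.9, 1.0]

def sample_wavenumber(xi):
    """Invert the piecewise-linear CDF: xi -> wavenumber."""
    i = bisect.bisect_right(cdf, xi) - 1
    i = min(i, len(power) - 1)
    frac = (xi - cdf[i]) / (cdf[i + 1] - cdf[i])
    return wavenumbers[i] + frac * (wavenumbers[i + 1] - wavenumbers[i])

print(sample_wavenumber(0.5))  # -> 2000.0
```

Because the mapping is a deterministic function of xi, reusing one xi for every surface element along a backward-traced bundle keeps the sampled spectral location consistent, which is the energy-balance point the abstract makes.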

  4. Improvement of Photon Buildup Factors for Radiological Assessment

    SciTech Connect

    F.G. Schirmers

    2006-07-01

    Slant-path buildup factors for photons between 1 keV and 10 MeV for nine radiation shielding materials (air, aluminum, concrete, iron, lead, leaded glass, polyethylene, stainless steel, and water) are calculated with the most recent cross-section data available using Monte Carlo and discrete ordinates methods. Discrete ordinates calculations use a 244-group energy structure that is based on previous research at Los Alamos National Laboratory (LANL), but extended with the results of this thesis and its focused studies on low-energy photon transport and the effects of group widths in multigroup calculations. Buildup factor calculations in discrete ordinates benefit from coupled photon/electron cross sections to account for secondary photon effects. Also, ambient dose equivalent (herein referred to as dose) buildup factors were analyzed at lower energies where corresponding response functions do not exist in the literature. The results of these studies are directly applicable to radiation safety at LANL, where the dose modeling tool Pandemonium is used to estimate worker dose in plutonium handling facilities. Buildup factors determined in this thesis will be used to enhance the code's modeling capabilities, but should be of interest to the radiation shielding community.
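A buildup factor B is the ratio of the total detector response behind a shield to the uncollided (narrow-beam) response exp(-mu*t); shielded-dose estimates then multiply the exponential attenuation by B. The sketch below uses a generic Taylor two-exponential fit as a stand-in for tabulated data; the fit coefficients are illustrative, not values from this thesis.

```python
import math

# Buildup-factor sketch: dose behind a shield = D0 * exp(-mu*t) * B(mu*t).
# B is represented by a Taylor-form fit A*exp(-a1*x) + (1-A)*exp(-a2*x);
# the coefficients below are illustrative placeholders, not fitted data.

def uncollided(mu_t):
    """Narrow-beam (uncollided) transmission through mu*t mean free paths."""
    return math.exp(-mu_t)

def buildup_taylor(mu_t, A=10.0, a1=-0.10, a2=0.05):
    """Taylor two-exponential buildup-factor form; B(0) = 1 by construction."""
    return A * math.exp(-a1 * mu_t) + (1.0 - A) * math.exp(-a2 * mu_t)

def shielded_dose(d0, mu_t):
    """Dose behind the shield including scattered (buildup) contribution."""
    return d0 * uncollided(mu_t) * buildup_taylor(mu_t)

print(round(buildup_taylor(0.0), 3), round(buildup_taylor(5.0), 3))
```

Note that B(0) = 1 (no shield means no scattered buildup) and B grows with shield thickness, which is the qualitative behavior any tabulated buildup dataset must reproduce.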

  5. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    SciTech Connect

    Quirk, Thomas, J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
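The free-electron Klein-Nishina baseline that the corrections above refine is itself typically sampled with Kahn's classic rejection scheme. A sketch of that scheme (this is the standard Kahn algorithm, not the ITS implementation or the Brusa et al. parameterization):

```python
import random

random.seed(42)

# Kahn's rejection algorithm for sampling the Klein-Nishina free-electron
# Compton cross section. alpha is photon energy in electron-rest-mass
# units; x = E/E' is the incident-to-scattered energy ratio.

def kahn_compton(alpha):
    """Sample (x, cos_theta) from the Klein-Nishina distribution."""
    while True:
        r1, r2, r3 = random.random(), random.random(), random.random()
        if r1 <= (2 * alpha + 1) / (2 * alpha + 9):
            x = 1 + 2 * alpha * r2
            if r3 <= 4 * (1 / x - 1 / x ** 2):
                break
        else:
            x = (2 * alpha + 1) / (2 * alpha * r2 + 1)
            cos_t = 1 - (x - 1) / alpha        # Compton kinematics
            if r3 <= 0.5 * (cos_t ** 2 + 1 / x):
                break
    return x, 1 - (x - 1) / alpha

# mean scattered-to-incident energy ratio for 1 MeV photons
alpha = 1.0 / 0.511
ratios = [1 / kahn_compton(alpha)[0] for _ in range(20000)]
print(round(sum(ratios) / len(ratios), 3))
```

Doppler broadening in the impulse approximation replaces the sharp kinematic relation between x and cos_theta used here with a distribution over the bound-electron momentum, which is exactly the effect the abstract says the current ITS model omits.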

  6. VERIFICATION OF THE SHIFT MONTE CARLO CODE

    SciTech Connect

    Sly, Nicholas; Mervin, Mervin Brenden; Mosher, Scott W; Evans, Thomas M; Wagner, John C; Maldonado, G. Ivan

    2012-01-01

    Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a fully-functional parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP5.

  7. Development of GPU-based Monte Carlo code for fast CT imaging dose calculation on CUDA Fermi architecture

    SciTech Connect

    Liu, T.; Du, X.; Ji, W.; Xu, X. G.

    2013-07-01

    This paper describes the development of a Graphics Processing Unit (GPU) accelerated Monte Carlo photon transport code, ARCHER{sub GPU}, to perform CT imaging dose calculations with good accuracy and performance. The code simulates interactions of photons with heterogeneous materials. It contains a detailed CT scanner model and a family of patient phantoms. Several techniques are used to optimize the code for the GPU architecture. In the accuracy and performance test, a 142 kg adult male phantom was selected, and the CT scan protocol involved a whole-body axial scan, 20-mm x-ray beam collimation, 120 kVp and a pitch of 1. A total of 9 × 10⁸ photons were simulated and the absorbed doses to 28 radiosensitive organs/tissues were calculated. The average percentage difference of the results obtained by the general-purpose production code MCNPX and ARCHER{sub GPU} was found to be less than 0.38%, indicating an excellent agreement. The total computation time was found to be 8,689, 139, and 56 minutes for MCNPX, ARCHER{sub CPU} (6-core) and ARCHER{sub GPU}, respectively, indicating a decent speedup. Under a recent grant funding from the NIH, the project aims at developing a Monte Carlo code with the capability of sub-minute CT organ dose calculations. (authors)

  8. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    NASA Astrophysics Data System (ADS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    Monte Carlo (MC) methods have distinct advantages for simulating complicated nuclear systems and are envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with multi-physics simulation, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear system analysis.

  9. DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION

    EPA Science Inventory

    A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
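The propagation-of-error approach sketched in this abstract draws each uncertain input parameter from its error distribution and pushes the samples through the model, reading the output spread as the propagated uncertainty. A minimal sketch under assumed values (the Henry's-law-style model and all means and standard deviations are illustrative, not EPA data):

```python
import random
import statistics

random.seed(0)

# Monte Carlo propagation of measurement uncertainty: sample each input
# parameter from its error distribution and push the samples through the
# model. The model and parameter values below are purely illustrative.

def air_water_ratio(vapor_pressure, solubility):
    """Toy Henry's-law-style partition ratio; stands in for a transport model."""
    return vapor_pressure / solubility

n = 50000
samples = []
for _ in range(n):
    vp = random.gauss(100.0, 10.0)    # vapor pressure, mean 100 +/- 10
    sol = random.gauss(50.0, 2.0)     # water solubility, mean 50 +/- 2
    samples.append(air_water_ratio(vp, sol))

mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(round(mean, 2), round(sd, 2))
```

Unlike first-order (Taylor-series) error propagation, this approach captures nonlinear models and non-Gaussian output distributions automatically, at the cost of sampling noise that shrinks with the number of trials.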

  10. Variance Reduction Techniques for Implicit Monte Carlo Simulations 

    E-print Network

    Landman, Jacob Taylor

    2013-09-19

    The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...

  11. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload on a single-core processor when refined modeling is required. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
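The core mechanic of domain decomposition for Monte Carlo transport is that each rank owns a spatial region, advances only its local particles, and hands off any particle that crosses a domain boundary. A toy 1-D sketch of that hand-off bookkeeping (geometry, step lengths, and the serial "ranks" loop are placeholders, not the JCOGIN/JMCT algorithms):

```python
# Toy spatial domain decomposition: the 1-D problem [0, 100) is split
# across "ranks"; each rank advances its own particles and buffers any
# that cross a boundary for hand-off to the owning domain.
# Geometry and transport are placeholders.

N_DOMAINS = 4
WIDTH = 100.0 / N_DOMAINS

def domain_of(x):
    """Owning domain index for position x."""
    return min(int(x // WIDTH), N_DOMAINS - 1)

# particles: (position, step) pairs, assigned to their starting domain
particles = {d: [] for d in range(N_DOMAINS)}
for x, step in [(5.0, 30.0), (40.0, 10.0), (70.0, 40.0)]:
    particles[domain_of(x)].append((x, step))

outbox = {d: [] for d in range(N_DOMAINS)}   # arrivals destined for domain d
for d in range(N_DOMAINS):                   # each "rank" advances locally
    for x, step in particles[d]:
        x_new = min(x + step, 99.999)        # clip at the problem boundary
        target = domain_of(x_new)
        if target != d:
            outbox[target].append((x_new, 0.0))  # hand off across boundary

migrated = {d: len(v) for d, v in outbox.items()}
print(migrated)  # -> {0: 0, 1: 1, 2: 1, 3: 1}
```

In a real code the outboxes are exchanged asynchronously between MPI ranks so computation and communication overlap, which is the point of the asynchronous communication the paper describes.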

  12. TRANSPORTATION

    E-print Network

    Texas Transportation Hall of Honor inductees. Texas is recognized as having one of the finest multimodal transportation systems in the world. The existence of this system has been key

  13. The new VARSKIN 4 photon skin dosimetry model.

    PubMed

    Hamby, D M; Lodwick, C J; Palmer, T S; Reese, S R; Higley, K A; Caffrey, J A; Sherbini, S; Saba, M; Bush-Goddard, S P

    2013-01-01

    A new photon skin dosimetry model, described here, was developed as the basis for the enhanced VARSKIN 4 thin-tissue dosimetry code. The model employs a point-kernel method that accounts for charged-particle build-up, photon attenuation and off-axis scatter. Early comparisons of the new model against Monte Carlo particle transport simulations show that VARSKIN 4 is highly accurate for very small sources on the skin surface, although accuracy at shallow depths is compromised for radiation sources that are on clothing or otherwise elevated from the skin surface. Comparison results are provided for a one-dimensional point source, a two-dimensional disc source and three-dimensional sphere, cylinder and slab sources. For very small source dimensions and sources in contact with the skin, comparisons reveal that the model is highly predictive. With larger source dimensions, air gaps, or the addition of clothing between the source and skin, however, VARSKIN 4 yields over-predictions of dose by as much as a factor of 2 to 3. These cursory Monte Carlo comparisons confirm that significant accuracy improvements beyond the previous version were achieved for all geometries. The improvements were obtained while retaining the characteristic VARSKIN user convenience and rapid performance. PMID:23070483
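A point-kernel method like the one described above evaluates the dose from an extended source by integrating a point-source kernel over the source geometry. The sketch below integrates a simple kernel (inverse-square with exponential attenuation) over a uniform disc; the kernel shape and attenuation coefficient are illustrative, not the VARSKIN 4 kernel.

```python
import math

# Point-kernel sketch: dose from a disc source is the integral of a
# point-source kernel over the disc, normalized per unit total activity.
# Kernel and constants are illustrative, not the VARSKIN 4 model.

def point_kernel(r_cm, mu=0.2):
    """Dose per unit activity from a point source at distance r [cm]."""
    return math.exp(-mu * r_cm) / (4.0 * math.pi * r_cm ** 2)

def disc_dose(radius_cm, depth_cm, n_rings=400):
    """Integrate the kernel over a uniform disc, dose point on the axis
    at the given depth; integration by concentric rings."""
    total, area = 0.0, math.pi * radius_cm ** 2
    dr = radius_cm / n_rings
    for i in range(n_rings):
        rho = (i + 0.5) * dr                     # ring mid-radius
        ring_area = 2.0 * math.pi * rho * dr
        dist = math.hypot(rho, depth_cm)         # source point to dose point
        total += point_kernel(dist) * ring_area / area
    return total

# per unit activity, a small disc concentrates dose under its center
print(disc_dose(0.5, 0.007) > disc_dose(2.0, 0.007))  # -> True
```

The same ring decomposition extends to off-axis dose points and to volume sources (cylinders, slabs) by adding an integration over depth, which is essentially how multi-dimensional source geometries are handled in point-kernel codes.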

  14. Rapid Monte Carlo simulation of detector DQE(f)

    PubMed Central

    Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric

    2014-01-01

    Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10⁷ to 10⁹ detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency. 
Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10⁷ reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10⁶ CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities. PMID:24593734
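The NPS estimation technique referenced above averages the squared magnitude of the 2D Fourier transform of mean-subtracted flood images over an ensemble. A minimal sketch using synthetic white-noise floods (for which the NPS is flat at the per-pixel variance); the normalization convention is one common choice, not necessarily the exact IEC prescription:

```python
import numpy as np

# NPS estimate: average |FFT2|^2 of mean-subtracted flood images over an
# ensemble. Synthetic white-noise floods are used, for which the NPS is
# flat at the per-pixel variance (here sigma^2 = 4).

rng = np.random.default_rng(7)

def nps_from_floods(floods, pixel_pitch=1.0):
    """2D noise power spectrum from an ensemble of flood images."""
    n_img, ny, nx = floods.shape
    spectra = np.zeros((ny, nx))
    for img in floods:
        fluct = img - img.mean()                 # remove the DC offset
        spectra += np.abs(np.fft.fft2(fluct)) ** 2
    return spectra * (pixel_pitch ** 2) / (n_img * nx * ny)

floods = rng.normal(loc=100.0, scale=2.0, size=(200, 32, 32))
nps = nps_from_floods(floods)
# exclude the DC bin, which the mean subtraction nulls by construction
mean_nps = (nps.sum() - nps[0, 0]) / (nps.size - 1)
print(round(float(mean_nps), 2))
```

With correlated (realistic) noise the spectrum is no longer flat, and the paper's point is that the variance of this estimator scales with the number of flood images, not with the fluence per flood, once the fluence exceeds the bias threshold.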


  16. Dosimetric accuracy of a deterministic radiation transport based ¹⁹²Ir brachytherapy treatment planning system. Part II: Monte Carlo and experimental verification of a multiple source dwell position plan employing a shielded applicator

    SciTech Connect

    Petrokokkinos, L.; Zourari, K.; Pantelis, E.; Moutsatsos, A.; Karaiskos, P.; Sakelliou, L.; Seimenis, I.; Georgiou, E.; Papagiannis, P.

    2011-04-15

    Purpose: The aim of this work is the dosimetric validation of a deterministic radiation transport based treatment planning system (BRACHYVISION v. 8.8, referred to as TPS in the following) for multiple ¹⁹²Ir source dwell position brachytherapy applications employing a shielded applicator in homogeneous water geometries. Methods: TPS calculations for an irradiation plan employing seven VS2000 ¹⁹²Ir high dose rate (HDR) source dwell positions and a partially shielded applicator (GM11004380) were compared to corresponding Monte Carlo (MC) simulation results, as well as experimental results obtained using the VIP polymer gel-magnetic resonance imaging three-dimensional dosimetry method with a custom made phantom. Results: TPS and MC dose distributions were found to be in agreement, mainly within ±2%. Considerable differences between TPS and MC results (greater than 2%) were observed at points in the penumbra of the shields (i.e., close to the edges of the ''shielded'' segment of the geometries). These differences were experimentally verified and therefore attributed to the TPS. Apart from these regions, experimental and TPS dose distributions were found to be in agreement within 2 mm distance-to-agreement and 5% dose difference criteria. As shown in this work, these results mark a significant improvement relative to dosimetry algorithms that disregard the presence of the shielded applicator, since the use of the latter leads to dosimetry errors on the order of 20%-30% at the edge of the ''unshielded'' segment of the geometry and even 2%-6% at points corresponding to the potential location of the target volume in clinical applications using the applicator (points in the unshielded segment at short distances from the applicator). 
Conclusions: Results of this work attest to the capability of the TPS to accurately account for the scatter conditions and the increased attenuation involved in HDR brachytherapy applications employing multiple source dwell positions and partially shielded applicators.

  17. Photonic Floquet topological insulators.

    PubMed

    Rechtsman, Mikael C; Zeuner, Julia M; Plotnik, Yonatan; Lumer, Yaakov; Podolsky, Daniel; Dreisow, Felix; Nolte, Stefan; Segev, Mordechai; Szameit, Alexander

    2013-04-11

    Topological insulators are a new phase of matter, with the striking property that conduction of electrons occurs only on their surfaces. In two dimensions, electrons on the surface of a topological insulator are not scattered despite defects and disorder, providing robustness akin to that of superconductors. Topological insulators are predicted to have wide-ranging applications in fault-tolerant quantum computing and spintronics. Substantial effort has been directed towards realizing topological insulators for electromagnetic waves. One-dimensional systems with topological edge states have been demonstrated, but these states are zero-dimensional and therefore exhibit no transport properties. Topological protection of microwaves has been observed using a mechanism similar to the quantum Hall effect, by placing a gyromagnetic photonic crystal in an external magnetic field. But because magnetic effects are very weak at optical frequencies, realizing photonic topological insulators with scatter-free edge states requires a fundamentally different mechanism, one that is free of magnetic fields. A number of proposals for photonic topological transport have been put forward recently. One suggested temporal modulation of a photonic crystal, thus breaking time-reversal symmetry and inducing one-way edge states. This is in the spirit of the proposed Floquet topological insulators, in which temporal variations in solid-state systems induce topological edge states. Here we propose and experimentally demonstrate a photonic topological insulator free of external fields and with scatter-free edge transport: a photonic lattice exhibiting topologically protected transport of visible light on the lattice edges. Our system is composed of an array of evanescently coupled helical waveguides arranged in a graphene-like honeycomb lattice. Paraxial diffraction of light is described by a Schrödinger equation where the propagation coordinate (z) acts as 'time'. 
Thus the helicity of the waveguides breaks z-reversal symmetry as proposed for Floquet topological insulators. This structure results in one-way edge states that are topologically protected from scattering. PMID:23579677

  18. User Manual Photon Transport Simulator

    E-print Network

    Chapman, Glenn H.

    [Table of contents: Software Methodology; Program Operation; Program Theory and Structure]

  19. The Role of Fast Neutron-Induced Photon Production Data in the Search for Oil

    SciTech Connect

    Duijvestijn, M.C.; Hogenbirk, A.; Koning, A.J.

    2005-03-15

    The sensitivity of coupled neutron-photon transport simulations to the underlying nuclear data is studied for oil well logging applications using the Monte Carlo radiation transport code MCNP4C. Results obtained with the JEF-2.2, ENDF/B-VI.5, and JENDL-3.2 data libraries reveal large discrepancies, immediately confirming the importance of well-established nuclear data. In order to refine this conclusion, the impact of several nuclides is determined by varying their neutron and photon total cross sections. As a next step, neutron cross sections and photon production cross sections are modified per reaction channel to identify the most important nuclear reactions playing a role in C/O logging. The influence of neutron and photon angular distributions is studied as well. The outcome of this analysis is a list of nuclear reactions that have a significant impact on borehole logging tool simulations in different environments and, therefore, would deserve much attention in the construction of a new library for oil well logging.

  20. Assessment of calibration parameters for an aerial gamma spectrometry system using Monte-Carlo technique.

    PubMed

    Srinivasan, P; Raman, Anand; Sharma, D N

    2002-04-01

    During a radiation emergency subsequent to a nuclear accident or weapon fallout, quick assessment of the ground contamination and the resulting exposure is of prime importance in planning and executing effective countermeasures. For an online assessment of ground contamination, it is essential to calibrate the detector system for several parameters, viz. the source energy, the source deployment matrix, and the flight altitude and position above the contaminated surface. This article discusses the methodology to predict all the necessary parameters, such as the photon fluence at various altitudes, the photo-peak counts in different energy windows, Air-to-Ground Correlation Factors (AGCF), and the dose rate at any height due to air-scattered gamma-ray photons. The methodology includes generation of theoretically simulated gamma spectra at a required detector position for a given source distribution on the ground, using the Monte Carlo method provided by the general-purpose coupled neutron/photon transport code MCNP (CCC-200). The generated gamma spectra are then analyzed to arrive at the required parameters mentioned above. PMID:15900666

  1. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  2. Single-photon emission tomography imaging of serotonin transporters in the non-human primate brain with the selective radioligand [(123)I]IDAM.

    PubMed

    Acton, P D; Kung, M P; Mu, M; Plössl, K; Hou, C; Siciliano, M; Oya, S; Kung, H F

    1999-08-01

    A new radioligand, 5-iodo-2-[[2-2-[(dimethylamino)methyl]phenyl]thio]benzyl alcohol ([(123)I]IDAM), has been developed for selective single-photon emission tomography (SPET) imaging of the serotonin transporter (SERT). In vitro binding studies suggest a high selectivity of IDAM for SERT (K(i)=0.097 nM), with considerably lower affinities for the norepinephrine and dopamine transporters (NET K(i)=234 nM and DAT K(i)>10 microM, respectively). In this study the biodistribution of SERT in the baboon brain was investigated in vivo using [(123)I]IDAM and SPET imaging. Dynamic sequences of SPET scans were performed on three female baboons (Papio anubis) after injection of 555 MBq of [(123)I]IDAM. Displacing doses (1 mg/kg) of the selective SERT ligand (+)McN5652 were administered 90-120 min after injection of [(123)I]IDAM. Similar studies were performed using a NET inhibitor, nisoxetine, and a DAT blocker, methylphenidate. After 60-120 min, the regional distribution of tracer within the brain reflected the characteristic distribution of SERT, with the highest uptake in the midbrain area (hypothalamus, raphe nucleus, substantia nigra) and the lowest uptake in the cerebellum (an area presumed free of SERT). Peak specific binding in the midbrain occurred at 120 min, with a ratio to the cerebellum of 1.80+/-0.13. At 30 min, 85% of the radioactivity in the blood was in the form of metabolites. Following injection of a competing SERT ligand, (+)McN5652, the tracer exhibited rapid washout from areas with high concentrations of SERT (dissociation rate constant in the midbrain, averaged over three baboons, k(off)=0.025+/-0.002 min(-1)), while the cerebellar activity distribution was undisturbed (washout rate 0.0059+/-0.0003 min(-1)). Calculation of the tracer washout rate pixel-by-pixel enabled the generation of parametric images of the dissociation rate constant. Similar studies using nisoxetine and methylphenidate had no effect on the distribution of [(123)I]IDAM in the brain.
These results suggest that [(123)I]IDAM is suitable for selective SPET imaging of SERT in the primate brain, with high contrast, favorable kinetics, and negligible binding to either NET or DAT. PMID:10436198

  3. Analyzing luminescent solar concentrators with front-facing photovoltaic cells using weighted Monte Carlo ray tracing

    NASA Astrophysics Data System (ADS)

    Woei Leow, Shin; Corrado, Carley; Osborn, Melissa; Isaacson, Michael; Alers, Glenn; Carter, Sue A.

    2013-06-01

    Luminescent solar concentrators (LSC) collect ambient light from a broad range of angles and concentrate the captured light onto photovoltaic (PV) cells. LSCs with front-facing cells collect direct and indirect sunlight, ensuring a gain factor greater than one. The flexible placement and percentage coverage of PV cells on the LSC panel allow layout adjustments to be made in order to balance re-absorption losses against the desired level of light concentration. A weighted Monte Carlo ray tracing program was developed to study the transport of photons and the loss mechanisms in the LSC to aid in design optimization. The program imports measured absorption/emission spectra of an organic luminescent dye (LR305), the transmission coefficient, and the refractive index of acrylic as parameters that describe the system. Simulations suggest that for LR305, 8-10 cm of luminescent material surrounding the PV cell yields the highest increase in power gain per unit area of LSC added, thereby determining the ideal spacing between PV cells in the panel. For rectangular PV cells, results indicate that for each centimeter of PV cell width, an additional increase of 0.15 mm in the waveguide thickness is required to efficiently transport the photons collected by the LSC to the PV cell with minimal loss.
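The weighted ray-tracing idea, in which absorption events attenuate a statistical weight rather than terminate the photon, can be sketched in one dimension. This is a toy illustration, not the authors' program; the attenuation coefficients, quantum yield, and trapping efficiency below are invented placeholder values.

```python
import math, random

def collected_weight(x0, L=10.0, alpha_host=0.02, alpha_dye=0.3,
                     q_yield=0.95, trap=0.74, seed=0):
    """One weighted photon history in a 1D luminescent slab [0, L].
    The PV cell sits at x=0; light leaving the far edge x=L is lost.
    A dye absorption re-weights the photon by q_yield*trap (re-emission
    survival times escape-cone trapping) instead of killing it, and the
    host matrix attenuates the weight continuously along each flight."""
    rng = random.Random(seed)
    x, w, u = x0, 1.0, -1.0                       # first flight toward the cell
    while 0.0 < x < L and w > 1e-6:
        d = -math.log(rng.random()) / alpha_dye   # path to next dye event
        end = x + u * d
        if end <= 0.0:                            # reaches the PV cell at x=0
            return w * math.exp(-alpha_host * x)
        if end >= L:                              # escapes through the far edge
            return 0.0
        w *= math.exp(-alpha_host * d) * q_yield * trap
        x, u = end, rng.choice((-1.0, 1.0))       # isotropic re-emission (1D)
    return 0.0

# Monte Carlo estimate of the weight collected from emission distance x0
est = lambda x0, n=2000: sum(collected_weight(x0, seed=s) for s in range(n)) / n
```

Averaging many such histories reproduces the qualitative trend the abstract describes: the collected weight falls off as the emission point moves away from the PV cell.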

  4. Investigation of systematic uncertainties in Monte Carlo-calculated beam quality correction factors.

    PubMed

    Wulff, J; Heverhagen, J T; Zink, K; Kawrakow, I

    2010-08-21

    Modern Monte Carlo codes allow for the calculation of ion chamber specific beam quality correction factors k(Q), which are needed for dosimetry in radiotherapy. While statistical (type A) uncertainties of the calculated data can be minimized sufficiently, the influence of systematic (type B) uncertainties is mostly unknown. This study presents an investigation of systematic uncertainties of Monte Carlo-based k(Q) values for a NE2571 thimble ion chamber, calculated with the EGSnrc system. Starting with some general investigation of transport parameter settings, the influence of geometry and source variations is studied. Furthermore, a systematic examination of uncertainties due to cross sections is introduced by determining the sensitivity of k(Q) results to changes in cross section data. For this purpose, single components of the photon cross sections and the mean excitation energy I in the electron stopping powers are varied. The corresponding sensitivities are subsequently combined with information on standard uncertainties for the cross section data found in the literature. It turns out that the calculation of k(Q) factors with EGSnrc is mostly insensitive to transport settings within the statistical uncertainties of approximately 0.1%. Severe changes in the dimensions of the chamber lead to comparatively small, insignificant changes in k(Q). Further, the inclusion of realistic beam models, delivering a complete phase space instead of simple photon spectra, does not significantly influence the result. However, the uncertainties in electron cross sections have an impact on the final uncertainty of k(Q) to a comparatively large degree. For the NE2571 chamber investigated in this work, this uncertainty amounts to 0.4% at 24 MV, decreasing to 0.2% at 6 MV. PMID:20668340
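The propagation scheme the abstract describes amounts to a sum in quadrature of sensitivity-weighted input uncertainties. A minimal sketch, with invented numbers (the paper's actual sensitivities and standard uncertainties are not reproduced here):

```python
import math

def combined_uncertainty(sensitivities, rel_uncertainties):
    """Combine independent relative uncertainties u_i of input data
    (cross-section components, mean excitation energy I, ...) through
    dimensionless sensitivities S_i = d(ln kQ)/d(ln x_i), assuming
    linear propagation and independence."""
    return math.sqrt(sum((s * u) ** 2
                         for s, u in zip(sensitivities, rel_uncertainties)))

# Illustrative values only: three input quantities with 1%, 0.5%, and
# 0.4% standard uncertainties and differing sensitivities.
u_kq = combined_uncertainty([0.1, 0.3, 0.8], [0.010, 0.005, 0.004])
```

The design point is that a weakly sensitive input (S close to 0) contributes little to u(kQ) even when its own uncertainty is large, which is why the sensitivity study precedes the uncertainty budget.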

  5. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    SciTech Connect

    Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C

    2013-01-01

    The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
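The abstract does not spell out the particular hybrid scheme, but a common one for source-detector problems of this type (CADIS: consistent adjoint driven importance sampling) builds a biased source and target particle weights from an approximate adjoint flux computed deterministically. A toy, fully discretized sketch of that bookkeeping, with made-up numbers:

```python
def cadis_targets(source, adjoint):
    """Given a discretized source q_i and an approximate adjoint
    ('importance') flux a_i from a cheap deterministic calculation,
    return the biased source density and the target particle weights
    used to set Monte Carlo weight windows."""
    R = sum(q * a for q, a in zip(source, adjoint))  # estimated response
    biased = [q * a / R for q, a in zip(source, adjoint)]
    weights = [R / a for a in adjoint]               # target weight per cell
    return biased, weights

# Three source cells; the last one matters most to the detector, so it
# is sampled more often but its particles start with lower weight.
biased, weights = cadis_targets([0.5, 0.3, 0.2], [0.1, 1.0, 10.0])
```

The defining invariant is that the biased density times the target weight recovers the true source in every cell, so the estimator stays unbiased while variance moves to where it matters.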

  6. Pion and electromagnetic contribution to dose: Comparisons of HZETRN to Monte Carlo results and ISS data

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Reddell, Brandon; Bahadori, Amir; Norman, Ryan B.; Badavi, Francis F.

    2013-07-01

    Recent work has indicated that pion production and the associated electromagnetic (EM) cascade may be an important contribution to the total astronaut exposure in space. Recent extensions to the deterministic space radiation transport code, HZETRN, allow the production and transport of pions, muons, electrons, positrons, and photons. In this paper, the extended code is compared to the Monte Carlo codes Geant4, PHITS, and FLUKA in slab geometries exposed to galactic cosmic ray (GCR) boundary conditions. While improvements in the HZETRN transport formalism for the new particles are needed, it is shown that reasonable agreement on dose is found at the larger shielding thicknesses commonly found on the International Space Station (ISS). Finally, the extended code is compared to ISS data on a minute-by-minute basis over a seven day period in 2001. The impact of pion/EM production on exposure estimates and validation results is clearly shown. The Badhwar-O'Neill (BO) 2004 and 2010 models are used to generate the GCR boundary condition at each time-step, allowing the impact of environmental model improvements on validation results to be quantified as well. It is found that the updated BO2010 model noticeably reduces overall exposure estimates from the BO2004 model, and the additional production mechanisms in HZETRN provide some compensation. It is shown that the overestimates provided by the BO2004 GCR model in previous validation studies led to deflated uncertainty estimates for environmental, physics, and transport models, and allowed an important physical interaction (pion/EM) to be overlooked in model development. Despite the additional pion/EM production mechanisms in HZETRN, a systematic under-prediction of total dose is observed in comparison to Monte Carlo results and measured data.

  7. Photonuclear Physics in Radiation Transport - II: Implementation

    SciTech Connect

    White, M.C.; Little, R.C.; Chadwick, M.B.; Young, P.G.; MacFarlane, R.E.

    2003-06-15

    This is the second of two companion papers. The first paper describes model calculations and nuclear data evaluations of photonuclear reactions on isotopes of C, O, Al, Si, Ca, Fe, Cu, Ta, W, and Pb for incident photon energies up to 150 MeV. This paper describes the steps taken to process these files into transport libraries and to update the Monte Carlo N-Particle (MCNP) and MCNPX radiation transport codes to use tabular photonuclear reaction data. The evaluated photonuclear data files are created in the standard evaluated nuclear data file (ENDF) format. These files must be processed by the NJOY data processing system into A Compact ENDF (ACE) files suitable for radiation transport calculations. MCNP and MCNPX have been modified to use these new data in a self-consistent and fully integrated manner. Verification problems were used at each step along the path to check the integrity of the methodology. The resulting methodology and tools provide a comprehensive system for using photonuclear data in radiation transport calculations. Also described are initial validation simulations used to benchmark several of the photonuclear transport tables.

  8. Environmental Modeling Applications: Monte Carlo Sensitivity Simulations

    E-print Network

    Dimov, Ivan

    Applications of Monte Carlo sensitivity simulations to the problem of air-pollution transport: the propagation of pollutants in a real-life scenario of air-pollution transport over Europe, studied with the Danish Eulerian Model.

  9. Changes in dose with segmentation of breast tissues in Monte Carlo calculations for low-energy brachytherapy

    SciTech Connect

    Sutherland, J. G. H.; Thomson, R. M.; Rogers, D. W. O.

    2011-08-15

    Purpose: To investigate the use of various breast tissue segmentation models in Monte Carlo dose calculations for low-energy brachytherapy. Methods: The EGSnrc user-code BrachyDose is used to perform Monte Carlo simulations of a breast brachytherapy treatment using TheraSeed Pd-103 seeds with various breast tissue segmentation models. Models used include a phantom where voxels are randomly assigned to be gland or adipose (randomly segmented), a phantom where a single tissue of averaged gland and adipose is present (averaged tissue), and a realistically segmented phantom created from previously published numerical phantoms. Radiation transport in averaged tissue while scoring in gland along with other combinations is investigated. The inclusion of calcifications in the breast is also studied in averaged tissue and randomly segmented phantoms. Results: In randomly segmented and averaged tissue phantoms, the photon energy fluence is approximately the same; however, differences occur in the dose volume histograms (DVHs) as a result of scoring in the different tissues (gland and adipose versus averaged tissue), whose mass energy absorption coefficients differ by 30%. A realistically segmented phantom is shown to significantly change the photon energy fluence compared to that in averaged tissue or randomly segmented phantoms. Despite this, resulting DVHs for the entire treatment volume agree reasonably because fluence differences are compensated by dose scoring differences. DVHs for the dose to only the gland voxels in a realistically segmented phantom do not agree with those for dose to gland in an averaged tissue phantom. Calcifications affect photon energy fluence to such a degree that the differences in fluence are not compensated for (as they are in the no calcification case) by dose scoring in averaged tissue phantoms. 
Conclusions: For low-energy brachytherapy, if photon transport and dose scoring both occur in an averaged tissue, the resulting DVH for the entire treatment volume is reasonably accurate because inaccuracies in photon energy fluence are compensated for by inaccuracies in localized dose scoring. If dose to fibroglandular tissue in the breast is of interest, then the inaccurate photon energy fluence calculated in an averaged tissue phantom will result in inaccurate DVHs and average doses for those tissues. Including calcifications necessitates the use of proper tissue segmentation.

  10. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    SciTech Connect

    Lloyd, S. A. M.; Ansbacher, W.

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm^3) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2%/1 mm gamma analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material.
Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.

  11. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
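The event-by-event transport that CREEP performs can be miniaturized to its two sampling steps: draw a free-flight distance from the total cross section, then pick the interaction channel in proportion to its partial cross section. A sketch with placeholder cross-section values (not EEDL data):

```python
import math, random

def next_event(sigmas, rng):
    """Sample (flight_length, channel) for one single-scatter step.
    `sigmas` maps channel name -> macroscopic cross section (1/cm);
    the flight length is exponentially distributed in the total cross
    section and the channel is chosen with probability sigma/total."""
    total = sum(sigmas.values())
    s = -math.log(rng.random()) / total      # free-flight distance
    xi, acc = rng.random() * total, 0.0
    for channel, sigma in sigmas.items():
        acc += sigma
        if xi <= acc:
            return s, channel
    return s, channel                        # guard against float rounding

rng = random.Random(1)
sigmas = {"ionization": 2.0, "elastic": 5.0, "bremsstrahlung": 0.1}
events = [next_event(sigmas, rng) for _ in range(5000)]
```

With these illustrative numbers, elastic events dominate and the mean free path approaches 1/7.1 cm, which is the behavior the channel-by-channel sampling is meant to reproduce.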

  12. A Monte Carlo study of lung counting efficiency for female workers of different breast sizes using deformable phantoms

    NASA Astrophysics Data System (ADS)

    Hegenbart, L.; Na, Y. H.; Zhang, J. Y.; Urban, M.; Xu, X. George

    2008-10-01

    There are currently no physical phantoms available for calibrating in vivo counting devices that represent women with different breast sizes, because such phantoms are difficult, time consuming, and expensive to fabricate. In this work, a feasible alternative involving computational phantoms was explored. A series of new female voxel phantoms with different breast sizes were developed and ported into a Monte Carlo radiation transport code for performing virtual lung counting efficiency calibrations. The phantoms are based on the RPI adult female phantom, a boundary representation (BREP) model. They were created with novel deformation techniques and then voxelized for the Monte Carlo simulations. Eight models were selected, with cup sizes ranging from AA to G according to brassiere industry standards. Monte Carlo simulations of a lung counting system were performed with these phantoms to study the effect of breast size on lung counting efficiencies, which are needed to determine the activity of a radionuclide deposited in the lung and hence to estimate the resulting dose to the worker. Contamination scenarios involving three different radionuclides, namely Am-241, Cs-137 and Co-60, were considered. The results show that detector efficiencies decrease considerably with increasing breast size, especially for low-energy photon emitting radionuclides. When the counting efficiencies of models with cup size AA were compared to those with cup size G, a difference of up to 50% was observed. The detector efficiencies for each radionuclide can be approximated by curve fitting against the total breast mass (second-order polynomial) or the cup size (power law).
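The curve fit the abstract mentions (efficiency versus total breast mass, second-order polynomial) is straightforward to reproduce in outline. The data points below are invented to mimic the reported trend of decreasing efficiency with increasing breast size; they are not the paper's values.

```python
import numpy as np

def fit_efficiency_vs_mass(mass_g, efficiency):
    """Least-squares second-order polynomial fit of counting efficiency
    against total breast mass, returning a callable model."""
    return np.poly1d(np.polyfit(mass_g, efficiency, 2))

# Invented example data: efficiency drops as breast mass grows.
mass = [200, 400, 600, 800, 1000, 1200]                 # grams
eff = [0.0100, 0.0082, 0.0068, 0.0057, 0.0049, 0.0044]  # counts per decay
model = fit_efficiency_vs_mass(mass, eff)
```

Once fitted, such a model lets an efficiency be interpolated for a worker whose breast mass falls between the simulated phantoms, which is the practical point of the calibration curve.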

  13. Monte-Carlo Simulations: FLUKA vs. MCNPX

    SciTech Connect

    Oden, M.; Krasa, A.; Majerle, M.; Svoboda, O.; Wagner, V.

    2007-11-26

    Several experiments were performed at the Phasotron and Nuclotron accelerators in JINR Dubna in which spallation reactions and neutron transport were studied. The experimental results were checked against the predictions of the Monte-Carlo code MCNPX. Discrepancies were observed at 1.5 GeV and 2 GeV on the 'Energy plus Transmutation' setup. The experimental results were therefore also checked with another code, FLUKA.

  14. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  15. Recent critical study of photon production in hadronic collisions

    NASA Astrophysics Data System (ADS)

    Aurenche, P.; Guillet, J. Ph.; Pilon, E.; Werlen, M.; Fontannaz, M.

    2006-05-01

    In the light of the new prompt-photon data collected by PHENIX at RHIC and by D0 in Run II of the Tevatron, we revisit the world prompt-photon data, both inclusive and isolated, in hadronic collisions, and compare them with NLO QCD calculations implemented in the Monte Carlo program JETPHOX.

  16. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
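Two ingredients named in the abstract, Fermi-Dirac initialisation and Pauli-blocked collisions, can be sketched directly. This is a generic illustration in temperature units, not the paper's algorithm; the chemical potential and energy cutoff below are arbitrary.

```python
import math, random

def sample_fermi_dirac(mu_over_T, rng, emax_over_T=20.0):
    """Rejection-sample an energy E/T from a nonrelativistic
    Fermi-Dirac energy distribution f(E) ~ sqrt(E)/(exp((E-mu)/T)+1).
    Envelope: f(E) <= sqrt(E) <= sqrt(emax), uniform proposal on
    [0, emax], so the acceptance test is exact."""
    fmax = math.sqrt(emax_over_T)
    while True:
        e = rng.random() * emax_over_T
        f = math.sqrt(e) / (math.exp(e - mu_over_T) + 1.0)
        if rng.random() * fmax <= f:
            return e

def pauli_blocked(e_final_over_T, mu_over_T, rng):
    """Pauli blocking for a proposed collision: block the event with
    probability equal to the Fermi-Dirac occupation of the final state."""
    f_occ = 1.0 / (math.exp(e_final_over_T - mu_over_T) + 1.0)
    return rng.random() < f_occ        # True -> final state occupied, block

rng = random.Random(0)
energies = [sample_fermi_dirac(10.0, rng) for _ in range(1000)]
```

For a strongly degenerate case (mu/T = 10) the sample mean sits near (3/5)mu, the zero-temperature result, and scattering into states well below the Fermi energy is almost always blocked.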

  17. Anti-photon

    E-print Network

    Jacques Moret-Bailly

    2010-11-11

    Quantum electrodynamics corrects miscalculations of classical electrodynamics, but by introducing the pseudo-particle "photon" it becomes a source of errors with serious practical consequences. W. E. Lamb therefore advised against using the word "photon" in the article from which this text takes its title. The purpose of this paper is neither a compilation nor a critique of Lamb's paper: it adds arguments and applications to show that the use of this concept is dangerous, while the semi-classical theory remains correct provided that common errors are fixed. In particular, the classical electromagnetic energy field is often, wrongly, treated as linear, so that Bohr's electron falls onto the nucleus and photon counting fails. Using absolute energies and radiances avoids these errors. Quantum electrodynamics quantizes "normal modes" chosen arbitrarily among the infinitely many sets of orthogonal modes of the electromagnetic field; changing the choice of normal modes splits the photons, which are pseudo-particles, not physical objects. Treating photons as small particles that interact with single atoms without pilot waves, astrophysicists use Monte Carlo computations for the propagation of light in homogeneous media, although this approach works only in opalescent media such as clouds. Thus, for instance, two theories fail where coherence and Einstein's theories are validated, giving a good interpretation of the rings of supernova remnant 1987A and of the spectrum found inside them. The high-frequency shifts of this spectrum can only result from a parametric interaction of light with excited atomic hydrogen, which is found in many regions of the universe.

  18. Photon path length distributions for cloudy atmospheres from GOSAT satellite measurements

    NASA Astrophysics Data System (ADS)

    Kremmling, Beke; Penning de Vries, Marloes; Wagner, Thomas

    2014-05-01

    The presence of clouds in the atmosphere has significant influence on the paths of photons of scattered sunlight. Besides reflections of radiation at the cloud top, additional scattering events inside the cloud may occur, lengthening or shortening the photon path in the atmosphere. Clouds consisting of multiple layers or patches may lead to a "ping pong" behaviour of the photons due to reflections at the individual surfaces. The objective of our study is the retrieval of photon path length distributions for various atmospheric cloud situations, which will lead to a better understanding of the influence of clouds on atmospheric radiative transport. Following principles from ground-based photon path length retrieval (Funk et al., 2003), our research combines space-based measurements of the oxygen A-band with radiative transfer simulations. The experimental spectra originate from the Japanese Greenhouse gases Observing SATellite (GOSAT), more precisely from the Fourier Transform Spectrometer TANSO-FTS. Its high spectral resolution allows the individual absorption lines to be almost completely resolved, which is a prerequisite for our study. The Monte Carlo radiative transfer model McArtim (Deutschmann et al., 2011) is used to model the measured spectra. This model allows user-defined input for the altitude-dependent cross sections and, furthermore, the incorporation of three-dimensional cloud shapes and properties. From the simulation output and the sun-satellite geometry, photon path length distributions can be obtained. Distributions of photon path lengths are presented for a selection of GOSAT observations of entirely cloud-covered atmospheres with similar measurement geometries.

  19. Direct Photons from a Hybrid Approach -- Exploring the parameter space

    E-print Network

    Bjoern Baeuchle; Marcus Bleicher

    2010-08-11

    Direct photon spectra are calculated within a transport+hydrodynamics hybrid approach, in which the high-density part of the transport evolution has been replaced by a 3+1-dimensional hydrodynamic calculation. We study the effects of changing the parameters of the two interfaces between the transport- and hydrodynamic descriptions on the resulting direct photon spectra.

  20. Photon absorptiometry

    SciTech Connect

    Velchik, M.G.

    1987-01-01

    Recently, there has been a renewed interest in the detection and treatment of osteoporosis. This paper is a review of the merits and limitations of the various noninvasive modalities currently available for the measurement of bone mineral density, with special emphasis placed upon the nuclear medicine techniques of single-photon and dual-photon absorptiometry. Clinicians should come away with an understanding of the relative advantages and disadvantages of photon absorptiometry and its optimal clinical application. 49 references.

  1. Hard photon production and matrix-element parton-shower merging

    E-print Network

    Stefan Hoeche; Steffen Schumann; Frank Siegert

    2010-03-05

    We present a Monte-Carlo approach to prompt-photon production, where photons and QCD partons are treated democratically. The photon fragmentation function is modelled by an interleaved QCD+QED parton shower. This known technique is improved by including higher-order real-emission matrix elements. To this end, we extend a recently proposed algorithm for merging matrix elements and truncated parton showers. We exemplify the quality of the Monte-Carlo predictions by comparing them to measurements of the photon fragmentation function at LEP and to measurements of prompt photon and diphoton production from the Tevatron experiments.

  2. Split exponential track length estimator for Monte-Carlo simulations of small-animal radiation therapy

    NASA Astrophysics Data System (ADS)

    Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.

    2014-12-01

    We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low-energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences normalized to the dose at the isocenter were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10^5 to 7.7 × 10^5 compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.
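The exponential TLE at the core of seTLE scores kerma along the whole ray instead of only at sampled interaction sites. A stripped-down sketch of that scoring step (materials, attenuation coefficients, and mass energy-absorption coefficients below are invented; the splitting and virtual-particle machinery of the paper is omitted):

```python
import math

def exp_tle(segments, mu, mu_en_rho, w0=1.0, energy=1.0):
    """Ray-trace one virtual photon through (material, length_cm)
    segments. Each segment scores kerma proportional to
    energy * (mu_en/rho) * integral of w * exp(-mu * t) dt over the
    segment, and the weight carried into the next segment is
    attenuated by exp(-mu * length)."""
    scores, w = {}, w0
    for material, length in segments:
        m = mu[material]                       # linear attenuation (1/cm)
        track = w * (1.0 - math.exp(-m * length)) / m
        scores[material] = scores.get(material, 0.0) \
            + energy * mu_en_rho[material] * track
        w *= math.exp(-m * length)
    return scores

mu = {"water": 0.07, "bone": 0.12}             # invented values, 1/cm
mu_en_rho = {"water": 0.030, "bone": 0.045}    # invented values, cm^2/g
dose = exp_tle([("water", 2.0), ("bone", 1.0), ("water", 2.0)], mu, mu_en_rho)
```

Because every segment of every ray contributes a deterministic score, the variance per history is far lower than for analog interaction-site scoring, which is the source of the efficiency gains the abstract reports.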

  3. Photon detectors with gaseous amplification

    SciTech Connect

    Va'vra, J.

    1996-08-01

    Gaseous photon detectors, including very large 4π devices such as those incorporated in SLD and DELPHI, are finally delivering physics after many years of hard work. Photon detectors are among the most difficult devices used in physics experiments, because they must achieve high efficiency for photon transport and for the detection of single photoelectrons. Among detector builders, there is hardly anybody who did not make mistakes in this area, and who does not have a healthy respect for the problems involved. This point is stressed in this paper, and it is suggested that only a very small operating phase space is available for running gaseous photon detectors in a very large system with good efficiency and few problems. In this paper the authors discuss what was done correctly or incorrectly in first generation photon detectors, and what would be their recommendations for second generation detectors. 56 refs., 11 figs.

  4. Application of Cerenkov radiation generated in plastic optical fibers for therapeutic photon beam dosimetry.

    PubMed

    Jang, Kyoung Won; Yagi, Takahiro; Pyeon, Cheol Ho; Yoo, Wook Jae; Shin, Sang Hun; Jeong, Chiyoung; Min, Byung Jun; Shin, Dongho; Misawa, Tsuyoshi; Lee, Bongsoo

    2013-02-01

    A Cerenkov fiber-optic dosimeter (CFOD) was fabricated using plastic optical fibers to measure Cerenkov radiation induced by a therapeutic photon beam. We measured the Cerenkov radiation generated in optical fibers under various irradiation conditions to evaluate the usability of Cerenkov radiation for photon beam therapy dosimetry. As a result, the spectral peak of Cerenkov radiation was measured at a wavelength of 515 nm, and the intensity of Cerenkov radiation increased linearly with increasing irradiated length of the optical fiber. Also, the intensity peak of Cerenkov radiation was measured in the irradiation angle range of 30 to 40 deg. In Monte Carlo N-particle transport code simulations, the relationship between the flux of electrons above the Cerenkov threshold energy and the energy deposition of a 6 MV photon beam was nearly linear. Finally, percentage depth doses for the 6 MV photon beam could be obtained using the CFOD, and the results were compared with those of an ionization chamber; the mean dose difference was about 0.6%. It is anticipated that this novel and simple CFOD can be effectively used for measuring depth doses in radiotherapy dosimetry. PMID:23377008
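
    The Cerenkov threshold energy mentioned above follows from the emission condition β > 1/n. A small sketch (the refractive index value used for a plastic fiber is an illustrative assumption):

```python
import math

M_E_MEV = 0.511  # electron rest energy, MeV

def cerenkov_threshold_kinetic_energy(n):
    """Minimum electron kinetic energy (MeV) for Cerenkov emission in a
    medium of refractive index n; emission requires beta > 1/n."""
    gamma_min = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
    return M_E_MEV * (gamma_min - 1.0)

# For a PMMA-like fiber (n ~ 1.49) the threshold is about 0.18 MeV,
# well below the energies of electrons set in motion by a 6 MV beam.
```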

  5. Folding Photons

    SciTech Connect

    Gregg, B. A.; van de Lagemaat, J.

    2012-05-01

    Scientists have shown that wrinkles and folds can be used to maximize the absorption of low-energy photons by efficiently redirecting them into a thin absorbing film. This inexpensive technique for structuring photonic substrates could be used to increase the efficiency of many organic photovoltaic cells.

  6. Monte Carlo evaluation of DNA fragmentation spectra induced by different radiation qualities.

    PubMed

    Alloni, D; Campa, A; Belli, M; Esposito, G; Mariotti, L; Liotta, M; Friedland, W; Paretzke, H; Ottolenghi, A

    2011-02-01

    The PARTRAC code has been continuously developed over the last several years. It is a Monte Carlo code based on an event-by-event description of the interactions taking place between the ionising radiation and liquid water, and in the present version it simulates the transport of photons, electrons, protons, helium and heavier ions. This is combined with an atom-by-atom representation of the biological target, i.e. the DNA target model of a diploid human fibroblast in its interphase (genome of 6 gigabase pairs). DNA damage is produced by the events of energy deposition, either directly, if they occur in the volume occupied by the sugar-phosphate backbone, or indirectly, if this volume is reached by radiation-induced radicals. This requires the determination of the probabilities of occurrence of DNA damage, for which experimental data are essential. After adjusting the relevant parameters through comparison of the simulation data with the DNA fragmentation induced by photon irradiation, the code has been used without further parameter adjustment, and the comparison with the fragmentation induced by charged particle beams has validated the code. In this paper, the results obtained for the DNA fragmentation induced by gamma rays and by charged particle beams of various LET are shown, with particular attention to the production of very small fragments that are not detected in experiments. PMID:21084331

  7. Implementation of an efficient Monte Carlo calculation for CBCT scatter correction: phantom study.

    PubMed

    Watson, Peter G F; Mainegra-Hing, Ernesto; Tomic, Nada; Seuntjens, Jan

    2015-01-01

    Cone-beam computed tomography (CBCT) images suffer from poor image quality, in large part due to contamination from scattered X-rays. In this work, a Monte Carlo (MC)-based iterative scatter correction algorithm was implemented on measured phantom data acquired from a clinical on-board CBCT scanner. An efficient EGSnrc user code (egs_cbct) was used to transport photons through an uncorrected CBCT scan of a Catphan 600 phantom. From the simulation output, the contributions of primary and scattered photons were estimated in each projection image. From these estimates, an iterative scatter correction was performed on the raw CBCT projection data. The results of the scatter correction were compared with the default vendor reconstruction. The scatter correction was found to reduce the error in CT number for selected regions of interest, while improving the contrast-to-noise ratio (CNR) by 18%. These results demonstrate the performance of the proposed scatter correction algorithm in improving image quality for clinical CBCT images. PMID:26219003
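
    The iterative scheme described above can be sketched generically (a simplified, hypothetical outline: the actual egs_cbct workflow estimates scatter per projection with Monte Carlo transport, abstracted here as a callable):

```python
def iterative_scatter_correction(measured, simulate_scatter, n_iter=3):
    """Iteratively subtract a scatter estimate from raw projections.

    measured: raw projection values; simulate_scatter: a function that
    returns the estimated scatter contribution given the current
    primary estimate (a stand-in for the MC transport step).
    """
    primary = list(measured)
    for _ in range(n_iter):
        scatter = simulate_scatter(primary)   # MC scatter estimate
        primary = [max(m - s, 0.0) for m, s in zip(measured, scatter)]
    return primary
```

    Each pass refines the primary-signal estimate, so the scatter simulation is run on progressively cleaner projections.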

  8. A Monte Carlo approach to assessing 147Pm in the liver of the adult phantom.

    PubMed

    Bhati, S

    1993-06-01

    A low-background phoswich detector is used to detect small amounts of 147Pm--a pure beta-emitting nuclide--present in the liver of an occupational worker. The assessment was based on the measurement of bremsstrahlung radiation produced by the beta particles in the tissue. Computer programs based on Monte Carlo techniques for photon transport have been developed to calculate the response of an external phoswich detector to 1) a 147Pm point source embedded in tissue-equivalent slabs of various thicknesses; and 2) various source distributions of 147Pm in the liver of an adult phantom. The goal is to theoretically calibrate the phoswich detector for each source distribution and to study the variation of maxima of the spectra with the depth of the source in the adult phantom liver and tissue-equivalent slabs. The initial bremsstrahlung photon distribution of 147Pm in water has been computed using Wyard's and Pratt's methods. These calculations have been compared with experimental measurements using Perspex acrylic sheet slabs. Good agreements have been noted when the initial bremsstrahlung spectrum is obtained by using Wyard's method. These results find applications in monitoring the liver burdens in occupational workers handling 147Pm-based radioluminous paints. PMID:8491620

  9. CTmod-a toolkit for Monte Carlo simulation of projections including scatter in computed tomography.

    PubMed

    Malusek, Alexandr; Sandborg, Michael; Carlsson, Gudrun Alm

    2008-05-01

    The CTmod toolkit is a set of C++ class libraries based on CERN's application development framework ROOT. It uses the Monte Carlo method to simulate the energy imparted to a CT-scanner detector array. Photons with a given angle-energy distribution are emitted from the X-ray tube, approximated by a point source, transported through a phantom, and their contribution to the energy imparted per unit surface area of each detector element is scored. Alternatively, the scored quantity may be the fluence, energy fluence, plane fluence, plane energy fluence, or air kerma at the center of each detector element. Phantoms are constructed from homogeneous solids or voxel arrays via overlapping. The implemented photon interactions (photoelectric effect, coherent scattering, and incoherent scattering) are restricted to the energy range from 10 to 200 keV. Variance reduction techniques include the collision density estimator and survival biasing combined with Russian roulette. The toolkit has been used to estimate the amount of scatter in cone beam computed tomography and planar radiography. PMID:18276033
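
    Survival biasing combined with Russian roulette, the variance reduction pairing mentioned above, can be sketched generically (parameter names and the roulette survival factor are illustrative, not CTmod's actual interface):

```python
import random

def survival_bias_and_roulette(weight, p_absorb, w_min=0.01, survival_factor=10.0):
    """Survival biasing: force the photon to survive each collision and
    reduce its statistical weight by the non-absorption probability;
    play Russian roulette once the weight falls below w_min."""
    weight *= (1.0 - p_absorb)        # survival biasing
    if weight < w_min:                # Russian roulette
        if random.random() < 1.0 / survival_factor:
            weight *= survival_factor  # survivor carries extra weight
        else:
            weight = 0.0               # particle killed
    return weight
```

    The roulette step keeps the tally unbiased on average while removing low-weight histories that would otherwise waste computing time.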

  10. A dose point kernel database using GATE Monte Carlo simulation toolkit for nuclear medicine applications: Comparison with other Monte Carlo codes

    SciTech Connect

    Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C.; Kagadis, George C.

    2012-08-15

    Purpose: GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and, more recently, CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Methods: Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to radiotherapy and dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta-emitting isotopes, e.g., 177Lu, 90Y, and 32P, and (c) gamma-emitting isotopes, e.g., 111In, 131I, 125I, and 99mTc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed against already published studies for different Monte Carlo codes, namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. Results: GATE provided reliable results in all cases (monoenergetic electrons, beta-emitting isotopes, and photon-emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. The produced DPKs are in very good agreement with the already published data, which allowed us to produce a unique DPK dataset using GATE. The dataset contains the total DPKs for 67Ga, 68Ga, 90Y, 99mTc, 111In, 123I, 124I, 125I, 131I, 153Sm, 177Lu, 186Re, and 188Re, generated in water, bone, and lung. Conclusions: In this study, the authors have checked GATE's reliability for absorbed dose calculation when transporting different kinds of particles, which indicates its robustness for dosimetry applications. A novel dataset of DPKs is provided, which can be applied in patient-specific dosimetry using analytical point kernel convolution algorithms.
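
    The scoring geometry described in the Methods (a point source at the center of a sphere, energy tallied in concentric shells) reduces to a simple radial binning; a schematic sketch, with names and bin width chosen for illustration:

```python
def shell_index(r, dr):
    """Concentric-shell bin for an energy deposit at radius r (cm)
    from a point source, with shell thickness dr."""
    return int(r // dr)

def score_dpk(deposits, dr, n_shells):
    """Accumulate absorbed energy per shell; deposits is a list of
    (radius, energy) pairs from simulated interaction events."""
    shells = [0.0] * n_shells
    for r, e in deposits:
        i = shell_index(r, dr)
        if i < n_shells:
            shells[i] += e
    return shells
```

    Dividing each shell's tally by its mass then yields the dose point kernel as a function of radial distance.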

  11. A new single-photon emission computed tomography (SPECT) imaging agent for serotonin transporters: [(125)I]Flip-IDAM, (2-((2-((dimethylamino)methyl)-4-iodophenyl)thio)phenyl)methanol.

    PubMed

    Zheng, Pinguan; Lieberman, Brian P; Ploessl, Karl; Lemoine, Laetitia; Miller, Sara; Kung, Hank F

    2013-02-01

    New ligands for in vivo brain imaging of the serotonin transporter (SERT) with single photon emission computed tomography (SPECT) were prepared and evaluated. An efficient synthesis and radiolabeling of a biphenylthiol, FLIP-IDAM, 4, was accomplished. The affinity of FLIP-IDAM was evaluated by an in vitro inhibitory binding assay using [(125)I]-IDAM as radioligand in rat brain tissue homogenates (K(i) = 0.03 nM). The new [(125)I]Flip-IDAM exhibited excellent binding affinity to SERT binding sites, with a high hypothalamus-to-cerebellum ratio of 4 at 30 min post-i.v. injection. The faster in vivo kinetics for brain uptake and rapid washout from non-specific regions provide an excellent signal-to-noise ratio. This new agent, when labeled with (123)I, may be a useful imaging agent for mapping SERT binding sites in the human brain. PMID:23265880

  12. Radiation Transport Calculations and Simulations

    SciTech Connect

    Fasso, Alberto; Ferrari, A. (CERN)

    2011-06-30

    This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
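
    The Monte Carlo integration at the heart of such transport codes can be stated in a few lines (a generic textbook sketch, not code from the article):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    returned with its one-sigma statistical error."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return (b - a) * mean, (b - a) * (var / n) ** 0.5
```

    The reported error shrinks as 1/sqrt(n), which is why the biasing techniques discussed in the article matter: they reduce the variance term rather than the sample count.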

  13. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. (Dept. of Nuclear and Energy Engineering); Alcouffe, R.E.

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor-of-five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  14. Dark Photon Search at BABAR

    SciTech Connect

    Greenwood, Ross N. (MIT; SLAC)

    2012-09-07

    Presented is the current progress of a search for the signature of a dark photon or new particle using the BaBar data set. We search for the processes e+e- → γ_ISR A', A' → e+e- and e+e- → γ_ISR γ, γ → A', A' → e+e-, where γ_ISR is an initial-state radiated photon of energy E_γ >= 1 GeV. Twenty-five sets of Monte Carlo simulations of e+e- collisions at an energy of 10.58 GeV were produced with different values of the A' mass, ranging from 100 MeV to 9.5 GeV. The mass resolution is calculated based on Monte Carlo simulations. We implement ROOT's Toolkit for Multivariate Analysis (TMVA), a machine learning tool that allows us to evaluate the signal character of events based on many discriminating variables. TMVA training is conducted with samples of Monte Carlo as signal and a small portion of Run 6 as background. The multivariate analysis produces additional cuts to separate signal and background. The signal efficiency and sensitivity are calculated. The analysis will move forward to fit the background and scan the residuals for the narrow resonance peak of a new particle.

  15. Monte Carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C

    NASA Astrophysics Data System (ADS)

    Ay, M. R.; Shahriari, M.; Sarkar, S.; Adib, M.; Zaidi, H.

    2004-11-01

    The general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic x-ray production were considered in this work. We focus on the simulation of various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and spectra calculated by IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials, and the results were compared with EGS4-based Monte Carlo simulations and measured data. Quantitative evaluation of the differences between our Monte Carlo simulated and comparison spectra was performed using Student's t-test statistical analysis. Generally, there is good agreement between the simulated x-ray and comparison spectra, although there are systematic differences between the simulated and reference spectra, especially in the K-characteristic x-ray intensity. Nevertheless, no statistically significant differences have been observed between IPEM spectra and the simulated spectra. It has been shown that the difference between MCNP-simulated spectra and IPEM spectra in the low energy range is the result of the overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C are in good agreement with the IPEM report, especially for tube voltages of 50 kV and 80 kV. The systematic discrepancy for higher tube voltages is the result of systematic differences between the corresponding spectra.

  16. Benchmark test of transport calculations of gold and nickel activation with implications for neutron kerma at Hiroshima.

    PubMed

    Hoshi, M; Hiraoka, M; Hayakawa, N; Sawada, S; Munaka, M; Kuramoto, A; Oka, T; Iwatani, K; Shizuma, K; Hasai, H

    1992-11-01

    A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a 252Cf fission neutron source to validate the use of the code for the energy spectrum analyses of Hiroshima atomic bomb neutrons. Nuclear data libraries used in the Monte Carlo neutron and photon transport code calculation were ENDF/B-III, ENDF/B-IV, LASL-SUB, and ENDL-73. The neutron moderators used were granite (the main component of which is SiO2, with a small fraction of hydrogen), Newlight [polyethylene with 3.7% boron (natural)], ammonium chloride (NH4Cl), and water (H2O). Each moderator was 65 cm thick. The neutron detectors were gold and nickel foils, which were used to detect thermal and epithermal neutrons (4.9 eV) and fast neutrons (> 0.5 MeV), respectively. Measured activity data from neutron-irradiated gold and nickel foils in these moderators decreased to about 1/1,000th or 1/10,000th, which correspond to about 1,500 m ground distance from the hypocenter in Hiroshima. For both gold and nickel detectors, the measured activities and the calculated values agreed within 10%. The slopes of the depth-yield relations in each moderator, except granite, were similar for neutrons detected by the gold and nickel foils. From the results of these studies, the Monte Carlo neutron and photon transport code was verified to be accurate enough for use with the elements hydrogen, carbon, nitrogen, oxygen, silicon, chlorine, and cadmium, and for the incident 252Cf fission spectrum neutrons. PMID:1399639

  17. Topological photonics

    E-print Network

    Lu, Ling

    The application of topology, the mathematics of conserved properties under continuous deformations, is creating a range of new opportunities throughout photonics. This field was inspired by the discovery of topological ...

  18. Photon generator

    DOEpatents

    Srinivasan-Rao, Triveni (Shoreham, NY)

    2002-01-01

    A photon generator includes an electron gun for emitting an electron beam, a laser for emitting a laser beam, and an interaction ring wherein the laser beam repetitively collides with the electron beam for emitting a high energy photon beam therefrom in the exemplary form of x-rays. The interaction ring is a closed loop, sized and configured for circulating the electron beam with a period substantially equal to the period of the laser beam pulses for effecting repetitive collisions.

  19. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  20. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    SciTech Connect

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  1. A Monte Carlo method for the PDF equations of turbulent flow

    E-print Network

    Pope, S. B.

    1980-01-01

    A Monte Carlo method is presented which simulates the transport equations of joint probability density functions (pdf's) in turbulent flows. (Finite-difference solutions of the equations are impracticable, mainly because ...

  2. Monte Carlo and thermal hydraulic coupling using low-order nonlinear diffusion acceleration

    E-print Network

    Herman, Bryan R. (Bryan Robert)

    2014-01-01

    Monte Carlo (MC) methods for reactor analysis are most often employed as a benchmark tool for other transport and diffusion methods. In this work, we identify and resolve a few of the issues associated with using MC as a ...

  3. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2009-01-01

    We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

  4. Kinetic Monte Carlo simulations of proton conductivity

    NASA Astrophysics Data System (ADS)

    Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.

    2014-07-01

    The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism, which is widely believed to be responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. In order to test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole (C7H6N2) molecules is studied.
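
    A single kinetic Monte Carlo step for a two-process (jump/reorientation) model of this kind can be sketched as follows (a generic residence-time algorithm; the rate values are placeholders, not fitted parameters from the study):

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: pick an event with probability
    proportional to its rate (e.g. proton jump vs. molecular
    reorientation) and advance the clock by an exponential waiting time."""
    total = sum(rates)
    r = rng.random() * total
    cum = 0.0
    for i, rate in enumerate(rates):
        cum += rate
        if r < cum:
            break
    dt = -math.log(rng.random()) / total
    return i, dt
```

    Iterating this step and recording proton displacements gives the diffusion and conductivity statistics the abstract refers to.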

  5. WE-A-17A-04: Development of An Ultra-Fast Monte Carlo Dose Engine for High Dose Rate Brachytherapy

    SciTech Connect

    Tian, Z; Hrycushko, B; Stojadinovic, S; Jiang, S; Jia, X; Zhang, M

    2014-06-15

    Purpose: Current clinical brachytherapy dose calculations are based on AAPM TG43 guidelines, which approximate the patient geometry as a large water phantom. This ignores heterogeneities and tends to overestimate skin dose. Although Monte Carlo (MC) dose calculation has been recognized as the most accurate method, its long computational time is a major bottleneck for routine clinical applications. This work aims to develop a GPU-based ultra-fast MC dose engine (gBMC) for HDR brachytherapy to provide clinical users with accurate sub-minute dose calculations. Methods: Standard photon transport with discrete events, including Compton scattering, Rayleigh scattering, and the photoelectric effect, was implemented. Secondary electrons were transported under the continuous slowing down approximation. To reduce GPU thread divergence, photons and electrons were transported separately, and photon transport was grouped according to energy. The source model in gBMC can be either a phase-space file generated using Geant4 or a parameterized source model. This dose engine was validated against TG43 in a water phantom and against Geant4 calculations in heterogeneous patient geometries. Results: A phase-space file was generated for the Varian VS2000 Ir-192 source. In a water phantom, the calculated radial dose function was within 0.6% of the TG43 calculations for radial distances from 1 cm to 20 cm. The anisotropy functions were within 1% for radial distances from 1 cm to 20 cm, except for polar angles larger than 173°. Local point-dose differences were within 2%. In a MammoSite breast cancer case with 22 dwell locations, gBMC and Geant4 isodose lines compared well. The computation time was about 28 seconds using the phase-space file source and 20 seconds using the parameterized source to simulate 1 billion particles, yielding less than 1% statistical uncertainty. Conclusion: The gBMC dose engine makes it possible to use fast and accurate MC dose calculations for clinical work.
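
    Sampling which discrete photon interaction occurs, in proportion to the partial cross sections, is the standard kernel of a photon transport loop like the one described above; a minimal sketch (names and interface are illustrative, not from gBMC):

```python
import random

def sample_interaction(sigma_photo, sigma_compton, sigma_rayleigh, rng):
    """Choose the discrete photon interaction channel with probability
    proportional to each partial cross section at the current energy."""
    total = sigma_photo + sigma_compton + sigma_rayleigh
    r = rng.random() * total
    if r < sigma_photo:
        return "photoelectric"
    if r < sigma_photo + sigma_compton:
        return "compton"
    return "rayleigh"
```

    On a GPU, grouping photons by energy (as the abstract notes) keeps threads in a warp sampling from similar cross-section tables, which limits divergence in exactly this branch.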

  6. TH-C-17A-08: Monte Carlo Based Design of Efficient Scintillating Fiber Dosimeters

    SciTech Connect

    Wiles, A; Loyalka, S; Rangaraj, D; Izaguirre, E

    2014-06-15

    Purpose: To accurately predict Cherenkov radiation generation in scintillating fiber dosimeters. Quantifying Cherenkov radiation provides a method for optimizing fiber dimensions, orientation, optical filters, and photodiode spectral sensitivity to achieve efficient real-time imaging dosimeter designs. Methods: We develop in-house Monte Carlo simulation software to model polymer scintillation fibers' fluorescence and Cherenkov emission in megavoltage clinical beams. The model computes emissions using generation probabilities, wavelength sampling, fiber photon capture, and fiber transport efficiency, and incorporates the fiber's index of refraction, optical attenuation in the Cherenkov and visible spectrum, and fiber dimensions. Detector component selection based on parameters such as silicon photomultiplier efficiency and optical coupling filters separates Cherenkov radiation from the dose-proportional scintillating emissions. The computation uses spectral and geometrical separation of Cherenkov radiation; however, other filtering techniques can expand the model. Results: We compute Cherenkov generation per electron and the fiber capture and transmission of those photons toward the detector, including the dependence on incident electron beam angle. The model accounts for beam obliquity and nonperpendicular electron-fiber impingement, which increases Cherenkov emission and trapping. The rotational angle around square fibers shows trapping efficiency variation from a minimum at normal incidence to a maximum at 45 degrees of rotation. For rotation in the plane formed by the fiber axis and its surface normal, trapping efficiency increases with angle from the normal. The Cherenkov spectrum follows the theoretical curve from 300 nm to 800 nm, the wavelength range of interest defined by silicon photomultiplier and photodiode spectral efficiency. Conclusion: We are able to compute Cherenkov generation in realistic real-time scintillating fiber dosimeter geometries. Design parameters incorporate fiber dimensions, orientations, several types of detector spectral response, optical coupling filters and light transport. We can vary these parameters to design and optimize high-efficiency real-time dosimeters capable of enhancing external beam patient safety and treatment accuracy. This research was supported in part by a GAANN Fellowship from the Department of Education.
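
    The theoretical Cherenkov yield curve referenced above is the Frank-Tamm result; a sketch of the photon count per unit path length between two wavelengths (the fiber index value in the comment is an illustrative assumption):

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def cherenkov_photons_per_cm(n, beta, lam1_nm, lam2_nm):
    """Frank-Tamm photon yield per cm of electron path between two
    wavelengths, for refractive index n and electron speed beta
    (in units of c); zero below the Cherenkov threshold beta*n <= 1."""
    if beta * n <= 1.0:
        return 0.0
    sin2_theta_c = 1.0 - 1.0 / (beta * n) ** 2
    lam1_cm, lam2_cm = lam1_nm * 1e-7, lam2_nm * 1e-7
    return 2.0 * math.pi * ALPHA * (1.0 / lam1_cm - 1.0 / lam2_cm) * sin2_theta_c
```

    For a relativistic electron in a plastic fiber (n ~ 1.49), this gives on the order of 500 photons per cm over the 300-800 nm band, consistent with the wavelength range of interest in the abstract.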

  7. Biexciton-mediated superradiant photon blockade

    E-print Network

    Poshakinskiy, Alexander V

    2015-01-01

    The photon blockade is a hallmark of quantum light transport through a single two-level system that can accommodate only one photon. Here, we theoretically show that two-photon transmission can be suppressed even for a seemingly classical system with a large number of quantum dots in a cavity when the biexciton nonlinearity is taken into account. We reveal a nonmonotonic dependence of the second-order correlation function of the transmitted photons on the biexciton binding energy. The blockade is realized by properly tuning the biexciton resonance, which controls the collective superradiant modes.

  8. Photon spectra from WIMP annihilation

    SciTech Connect

    Cembranos, J. A. R.; Cruz-Dombriz, A. de la; Dobado, A.; Maroto, A. L.; Lineros, R. A.

    2011-04-15

    If the present dark matter in the Universe annihilates into standard model particles, it must contribute to the fluxes of cosmic rays that are detected on the Earth and, in particular, to the observed gamma-ray fluxes. The magnitude of such a contribution depends on the particular dark matter candidate, but certain features of the produced photon spectra may be analyzed in a rather model-independent fashion. In this work we provide the complete photon spectra coming from WIMP annihilation into standard model particle-antiparticle pairs obtained by extensive Monte Carlo simulations. We present results for each individual annihilation channel and provide analytical fitting formulas for the different spectra for a wide range of WIMP masses.

  9. Specific absorbed fractions of energy from internal photon sources in brain tumor and cerebrospinal fluid

    SciTech Connect

    Evans, J.F.; Stubbs, J.B.

    1995-03-01

    Transferrin, radiolabeled with In-111, can be coinjected into glioblastoma multiforme lesions, and subsequent scintigraphic imaging can demonstrate the biokinetics of the cytotoxic transferrin. The administration of ¹¹¹In-transferrin into a brain tumor results in distribution of radioactivity in the brain, brain tumor, and the cerebrospinal fluid (CSF). Information about absorbed radiation doses to these regions, as well as other nearby tissues and organs, is important for evaluating radiation-related risks from this procedure. The radiation dose is usually estimated for a mathematical representation of the human body. We have included source/target regions for the eye, lens of the eye, spinal column, spinal CSF, cranial CSF, and a 100-g tumor within the brain of an adult male phantom developed by Cristy and Eckerman. The spinal column, spinal CSF, and the eyes have not been routinely included in photon transport simulations. Specific absorbed fractions (SAFs) as a function of photon energy were calculated using the ALGAMP computer code, which utilizes Monte Carlo techniques for simulating photon transport. The ALGAMP code was run three times, with the source activity distributed uniformly within the tumor, cranial CSF, and the spinal CSF volumes. These SAFs, which were generated for 12 discrete photon energies ranging from 0.01 to 4.0 MeV, were used with decay scheme data to calculate S-values needed for estimating absorbed doses. S-values for ¹¹¹In are given for three source regions (brain tumor, cranial CSF, and spinal CSF) and all standard target regions/organs, the eye and lens, as well as tissues within these source regions. S-values for the skeletal regions containing active marrow are estimated. These results are useful in evaluating the radiation doses from intracranial administration of ¹¹¹In-transferrin.
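The final step the abstract describes, folding SAFs and decay-scheme data into S-values, is a short sum over the photon emissions. The sketch below is illustrative only: the two In-111 gamma lines are standard decay data, but the flat SAF value and the `saf_at_energy` callable are hypothetical placeholders, not ALGAMP output.

```python
# Hedged sketch of the MIRD-style S-value calculation described above.
MEV_TO_J = 1.602176634e-13

# In-111 principal gamma emissions: (energy [MeV], yield per decay)
emissions = [(0.1713, 0.907), (0.2454, 0.941)]

def s_value(saf_at_energy, emissions):
    """S-value [Gy per Bq*s] = sum over emissions of y_i * E_i * SAF(E_i).

    saf_at_energy: callable returning the specific absorbed fraction
    [1/kg] at a given photon energy, e.g. interpolated from the 12
    discrete energies tabulated by a code such as ALGAMP.
    """
    return sum(y * e * MEV_TO_J * saf_at_energy(e) for e, y in emissions)

# Hypothetical flat SAF of 0.02 kg^-1 (placeholder, not a tabulated value)
s = s_value(lambda e: 0.02, emissions)
```

The callable keeps the decay data separate from the (interpolated) SAF table, which is how such tabulations are usually consumed.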

  10. Green photonics

    NASA Astrophysics Data System (ADS)

    Quan, Frederic

    2012-02-01

    Photonics, the broad merger of electronics with the optical sciences, encompasses such a wide swath of technology that its impact is almost universal in our everyday lives. This is a broad overview of some aspects of the industry and their contribution to the ‘green’ or environmental movement. The rationale for energy conservation is briefly discussed and the impact of photonics on our everyday lives and certain industries is described. Some opinions from industry are presented along with market estimates. References are provided to some of the most recent research in these areas.

  11. Photonic Bandgaps in Photonic Molecules

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Chang, Hongrok; Gates, Amanda L.; Fuller, Kirk A.; Gregory, Don A.; Witherow, William K.; Paley, Mark S.; Frazier, Donald O.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    This talk will focus on photonic bandgaps that arise due to nearly free photon and tight-binding effects in coupled microparticle and ring-resonator systems. The Mie formulation for homogeneous spheres is generalized to handle core/shell systems and multiple concentric layers in a manner that exploits an analogy with stratified planar systems, thereby allowing concentric multi-layered structures to be treated as photonic bandgap (PBG) materials. Representative results from a Mie code employing this analogy demonstrate that photonic bands arising from nearly free photon effects are easily observed in the backscattering, asymmetry parameter, and albedo for periodic quarter-wave concentric layers, though they are not readily apparent in extinction spectra. Rather, the periodicity simply alters the scattering profile, enhancing the ratio of backscattering to forward scattering inside the bandgap, in direct analogy with planar quarter-wave multilayers. PBGs arising from tight-binding may also be observed when the layers (or rings) are designed such that the coupling between them is weak. We demonstrate that for a structure consisting of N coupled micro-resonators, the morphology dependent resonances split into N higher-Q modes, in direct analogy with other types of oscillators, and that this splitting ultimately results in PBGs which can lead to enhanced nonlinear optical effects.

  12. Radiation Transport Analysis in Chalcogenide-Based Devices and a Neutron Howitzer Using MCNP

    NASA Astrophysics Data System (ADS)

    Bowler, Herbert

    As photons, electrons, and neutrons traverse a medium, they impart their energy in ways that are analytically difficult to describe. Monte Carlo methods provide valuable insight into understanding this behavior, especially when the radiation source or environment is too complex to simplify. This research investigates simulating various radiation sources using the Monte Carlo N-Particle (MCNP) transport code, characterizing their impact on various materials, and comparing the simulation results to general theory and measurements. A total of five sources were of interest: two photon sources of different incident particle energies (3.83 eV and 1.25 MeV), two electron sources also of different energies (30 keV and 100 keV), and a californium-252 (Cf-252) spontaneous fission neutron source. Lateral and vertical programmable metallization cells (PMCs) were developed by other researchers for exposure to these photon and electron sources, so simplified PMC models were implemented in MCNP to estimate the doses and fluences. Dose rates measured around the neutron source and the predicted maximum activity of activation foils exposed to the neutrons were determined using MCNP and compared to experimental results obtained from gamma-ray spectroscopy. The analytical fluence calculations for the photon and electron cases agreed with MCNP results; residual differences arise because MCNP accounts for particle motions that hand calculations do not. Doses for the photon cases agreed between the analytical and simulated results, while the electron cases differed by a factor of up to 4.8. Physical dose rate measurements taken from the neutron source agreed with MCNP within the 10% tolerance of the measurement device. The activity results had a percent error of up to 50%, which suggests a need to further evaluate the spectroscopy setup.
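A hedged sketch of the kind of analytical fluence and dose hand calculation such MCNP results are benchmarked against, assuming an isotropic point source in vacuum and charged-particle equilibrium; the photon count, distance, and the approximate water mass energy-absorption coefficient at 1.25 MeV are representative illustration values, not the thesis data.

```python
import math

def fluence(n_photons, r_cm):
    """Uncollided fluence [photons/cm^2] at distance r from an
    isotropic point source (inverse-square law, no attenuation)."""
    return n_photons / (4.0 * math.pi * r_cm**2)

def dose_gy(fluence_per_cm2, energy_mev, mu_en_over_rho_cm2_per_g):
    """Dose [Gy] = fluence * E * (mu_en/rho), converting MeV/g to J/kg."""
    mev_per_g = fluence_per_cm2 * energy_mev * mu_en_over_rho_cm2_per_g
    return mev_per_g * 1.602176634e-13 * 1000.0

phi = fluence(1e10, 100.0)      # 1e10 photons, scored 1 m away
d = dose_gy(phi, 1.25, 0.0296)  # ~mu_en/rho for water at 1.25 MeV [cm^2/g]
```

A full transport code additionally tracks scattered photons and secondary electrons, which is exactly the "particle motions that hand calculations do not" consider.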

  13. Enhanced physics design with hexagonal repeated structure tools using Monte Carlo methods

    SciTech Connect

    Carter, L L; Lan, J S; Schwarz, R A

    1991-01-01

    This report discusses proposed new missions for the Fast Flux Test Facility (FFTF) reactor which involve the use of target assemblies containing local hydrogenous moderation within this otherwise fast reactor. Parametric physics design studies with Monte Carlo methods are routinely utilized to analyze the rapidly changing neutron spectrum. An extensive utilization of the hexagonal lattice-within-lattice capabilities of the Monte Carlo Neutron Photon (MCNP) continuous energy Monte Carlo computer code is applied here to solving such problems. Simpler examples that use the lattice capability to describe fuel pins within a "brute force" description of the hexagonal assemblies are also given.

  14. Monte Carlo calculations of PET coincidence timing: single and double-ended readout.

    PubMed

    Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W

    2015-09-21

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5:Ce (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm × 3 mm × 30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. For LaBr3:Ce crystals 3 mm × 3 mm × 30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons, the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm × 3 mm × 30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound. PMID:26350162
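A stripped-down version of such a CRT Monte Carlo can be sketched as follows. This is a toy model, not the authors' code: photoelectron times are exponential (scintillator decay) plus Gaussian photodetector jitter, and the trigger fires on the k-th earliest photoelectron as a crude stand-in for the optimized constant-fraction trigger; optical transport, rise time, and depth of interaction are omitted.

```python
import heapq, random, statistics

FWHM_TO_SIGMA = 1.0 / 2.3548  # Gaussian fwhm -> standard deviation

def trigger_time(n_pe, decay_ns, jitter_fwhm_ns, k, rng):
    """Time of the k-th earliest photoelectron for one event."""
    sigma = jitter_fwhm_ns * FWHM_TO_SIGMA
    times = (rng.expovariate(1.0 / decay_ns) + rng.gauss(0.0, sigma)
             for _ in range(n_pe))
    return heapq.nsmallest(k, times)[-1]

def crt_fwhm(n_events=300, n_pe=4000, decay_ns=37.0,
             jitter_fwhm_ns=0.2, k=5, seed=1):
    """CRT = fwhm of the trigger-time difference of two detectors."""
    rng = random.Random(seed)
    diffs = [trigger_time(n_pe, decay_ns, jitter_fwhm_ns, k, rng)
             - trigger_time(n_pe, decay_ns, jitter_fwhm_ns, k, rng)
             for _ in range(n_events)]
    return 2.3548 * statistics.pstdev(diffs)

crt = crt_fwhm()  # LSO-like parameters taken from the abstract
```

Even this crude model lands in the right regime (sub-nanosecond CRT for LSO-like parameters), which makes it a useful sanity check before building in the full photon-transport detail the paper describes.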

  15. Monte Carlo calculations of PET coincidence timing: single and double-ended readout

    NASA Astrophysics Data System (ADS)

    Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.

    2015-09-01

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5:Ce (LSO), LaBr3:Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm × 3 mm × 30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. For LaBr3:Ce crystals 3 mm × 3 mm × 30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons, the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm × 3 mm × 30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound.

  16. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulating the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector through a computational procedure which could be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. PMID:26188622
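The optimization loop the paper describes can be illustrated with a toy elitist evolutionary search. Everything here is a stand-in: the two-parameter `fepe_model` is a made-up analytic surrogate for the Monte Carlo photon-transport step, and the reference FEPEs are generated from it rather than measured.

```python
import math, random

# Surrogate forward model (hypothetical): efficiency rises with active
# radius, falls with dead-layer thickness (more strongly at low energy)
# and with photon energy. Not a real detector model.
def fepe_model(radius_cm, dead_layer_mm, energy_kev):
    return (radius_cm ** 2) / (radius_cm ** 2 + 10.0) \
        * math.exp(-5.0 * dead_layer_mm / energy_kev) \
        * (100.0 / (100.0 + energy_kev))

true_params = (3.2, 0.7)                     # "unknown" detector geometry
energies = [60.0, 122.0, 662.0, 1332.0]
reference = [fepe_model(*true_params, e) for e in energies]

def cost(params):
    """Sum of squared FEPE residuals against the reference set."""
    return sum((fepe_model(*params, e) - r) ** 2
               for e, r in zip(energies, reference))

# Elitist evolutionary search: keep the 10 best, mutate each into 2 children.
rng = random.Random(0)
pop = [(rng.uniform(1, 5), rng.uniform(0, 2)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=cost)
    parents = pop[:10]
    pop = parents + [(max(0.1, p[0] + rng.gauss(0, 0.1)),
                      max(0.0, p[1] + rng.gauss(0, 0.1)))
                     for p in parents for _ in range(2)]
best = min(pop, key=cost)
```

In the paper's actual workflow, each `fepe_model` evaluation would be a Monte Carlo photon-transport run, so evaluation cost, not the search strategy, dominates.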

  17. ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

    NASA Astrophysics Data System (ADS)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2014-06-01

    The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software package called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.

  18. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, within cooperative projects, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator.
    MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum, and neutron flux distribution. The validation of the measurement simulations with Monte Carlo transport codes for the design, optimization and data analysis of further P&DGNAA facilities is performed in collaboration with LMN CEA Cadarache. The performance of prompt gamma neutron activation analysis (PGNAA) for the nondestructive determination of actinides in small samples is investigated. The quantitative determination of actinides relies on the precise knowledge of partial neutron capture cross sections. To date, these cross sections are not accurate enough for analytical purposes. The goal of the TANDEM (Trans-uranium Actinides' Nuclear Data - Evaluation and Measurement) Collaboration is the evaluation of these cross sections. Cross sections are measured using prompt gamma activation analysis facilities in Budapest and Munich. Geant4 is used to optimally design the detection system with Compton suppression. Furthermore, for the evaluation of the cross sections the results must be corrected for the self-attenuation of the prompt gammas within the sample. In a cooperative framework, RWTH Aachen University, Forschungszentrum Jülich and Siemens AG will study the feasibility of a compact Neutron Imaging System for Radioactive waste Analysis (NISRA). The system is based on a 14 MeV neutron source and an advanced detector system (a-Si flat panel) coupled to a dedicated converter/scintillator for fast neutrons. For shielding and radioprotection studies the codes MCNPX and Geant4 were used. The two codes were benchmarked in terms of processing time and accuracy of the computed neutron and gamma fluxes. The detector response was also simulated with Geant4 to optimize components of the system.

  19. EDITORIAL: International Workshop on Current Topics in Monte Carlo Treatment Planning

    NASA Astrophysics Data System (ADS)

    Verhaegen, Frank; Seuntjens, Jan

    2005-03-01

    The use of Monte Carlo particle transport simulations in radiotherapy was pioneered in the early nineteen-seventies, but it was not until the eighties that they gained recognition as an essential research tool for radiation dosimetry, health physics and, later on, radiation therapy treatment planning. Since the mid-nineties, there has been a boom in the number of workers using MC techniques in radiotherapy, and in the quantity of papers published on the subject. Research and applications of MC techniques in radiotherapy span a very wide range, from fundamental studies of cross sections and development of particle transport algorithms to clinical evaluation of treatment plans for a variety of radiotherapy modalities. The International Workshop on Current Topics in Monte Carlo Treatment Planning took place at Montreal General Hospital, which is part of McGill University, halfway up Mount Royal on Montreal Island. It was held from 3-5 May, 2004, right after the freezing winter had lost its grip on Canada. About 120 workers attended the Workshop, representing 18 countries. Most of the pioneers in the field were present, along with a large group of young scientists. In a very full programme, 41 long papers were presented (of which 12 were invited) and 20 posters were on display during the whole meeting. The topics covered included the latest developments in MC algorithms, statistical issues, source modelling and MC treatment planning for photon, electron and proton treatments. The final day was entirely devoted to clinical implementation issues. Monte Carlo radiotherapy treatment planning has only now made a slow entrée into the clinical environment, taking considerably longer than envisaged ten years ago. Of the twenty-five papers in this dedicated special issue, about a quarter deal with this topic, with probably many more studies to follow in the near future. If anything, we hope the Workshop served as an accelerator for more clinical evaluation of MC applications.
The remainder of the papers in this issue demonstrate that there is still plenty of work to be undertaken on other topics such as source modelling, calculation speed, data analysis, and development of user-friendly applications. We acknowledge the financial support of the National Cancer Institute of Canada, the Institute of Cancer Research of the Canadian Institutes of Health Research, the Research Grants Office and the Post Graduate Student Society of McGill University, and the Institute of Physics Publishing (IOPP). A final word of thanks goes out to all of those who contributed to the successful Workshop: our local medical physics students and staff, the many colleagues who acted as guest associate editors for the reviewing process, the IOPP staff, and the authors who generated new and exciting work.

  20. Validation of modulated electron radiotherapy delivered with photon multileaf collimation

    NASA Astrophysics Data System (ADS)

    Klein, Eric E.

    There is a challenge in radiotherapy to treat shallow targets due to the inability to provide dose homogeneity while simultaneously minimizing dose to distal critical organs. There is a niche for Modulated Electron Radiotherapy (MERT) to complement a photon IMRT program. Disease sites such as post-mastectomy chest wall, and subcutaneous lymphoma of the scalp, etc., are better suited for modulated electrons rather than photons, or perhaps a combination. Inherent collimation systems are not conducive to electron beam delivery (in lieu of extended applicators), nor do commercial treatment planning systems model electrons collimated without applicators. The purpose of this study is to evaluate modulation of electrons by inherent photon multileaf collimators, calculated and optimized by means of Monte Carlo. Modulated electron radiotherapy (MERT) evaluation was conducted with a Trilogy 120-leaf MLC for 6-20 MeV. To provide a sharp penumbra, modulated beams were delivered with short SSDs (70-85 cm). Segment widths (SW) ranging from 1 to 10 cm were configured for delivery and planning, using the BEAMnrc MC code with 10⁹ particles, and DOSXYZnrc calculations. Calculations were set with: voxel size 0.2 × 0.2 × 0.1 cm³, and photon/electron transport energy cutoffs of 0.01 MeV/0.521 MeV. Dosimetry was performed with film and micro chambers. Calculated and measured data were analyzed in MatLab. Once validation of static fields was successfully completed, modulated portals (segmented and dynamic) were configured for treatment and calculations. Optimization for target coverage and OAR sparing was achieved by choosing energies according to target depth, and SW according to spatial coverage. Intensity for each segment was optimized by MC methods. Beam sharpness (penumbra) degraded with decreasing energy and SW, and increasing SSD. PDD decreased significantly with decreasing SW. We have demonstrated excellent calculation/measurement agreement (<3 mm). Equal dose profiles were achieved with delivery by static, segmental or dynamic MLC, except in the periphery and deep depth regions, which were lower with static delivery. With segmented delivery, we introduced small (~1.5 mm) gaps between segments to homogenize distributions at prescription depth. We achieved conformal coverage of treatment targets. The treatment time to deliver 5 segments of 3 energies was ~90 s, including console reprogramming. This study shows MERT as delivered with existing photon MLC is feasible, and provides conformal beam dose distributions.

  1. Enhanced Neoclassical Polarization: Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Molvig, Kim; Ernst, Darin; Hallatschek, Klaus

    2003-10-01

    The theoretical prediction of enhanced neoclassical polarization (K. Molvig, Yong Xiao, D. R. Ernst, K. Hallatschek, Sherwood Fusion Theory Conference, 2003) in a tokamak plasma is investigated numerically using a Monte Carlo approach to combine the effects of collisions with guiding center tokamak orbits. The collisionless, kinematic contribution to the polarization first calculated by Rosenbluth and Hinton (M.N. Rosenbluth and F.L. Hinton, Phys. Rev. Lett. 80, 724 (1998)) is reproduced from the orbits directly. A fifth order Runge-Kutta orbit integrator is used to give extremely high orbit accuracy. The cancellation of opposite trapped and circulating particle radial flows is verified explicitly in this simulation. The Monte Carlo representation of pitch angle scattering collisions (X.Q. Xu and M.N. Rosenbluth, Phys. Fluids B 3, 627 (1991)) is used to compute the collisional processes. The numerical simulation determines the generalized Fokker-Planck coefficients used as the basis for transport in the Lagrangian formulation (I.B. Bernstein and K. Molvig, Phys. Fluids 26, 1488 (1983)) of transport theory. The computation generates the banana diffusion coefficient, ⟨Δψ²/Δt⟩, and the correlated cross process, ⟨ΔψΔθ/Δt⟩, responsible for the enhanced polarization. The numerical procedure generates smooth coefficients and resolves the analytic singularity that occurs at the trapped-circulating boundary.
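The cited Monte Carlo pitch-angle scattering representation is commonly written as a deterministic drag toward isotropy plus a random kick, lambda' = lambda(1 - nu*dt) ± sqrt((1 - lambda²)·nu·dt), where lambda = v_parallel/v. A minimal sketch under that assumption; nu, dt, and the beam-like initial pitch are arbitrary illustration values, not the paper's parameters.

```python
import math, random

def scatter(lam, nu, dt, rng):
    """One Monte Carlo pitch-angle scattering step: drag toward zero
    plus a random-sign kick whose variance reproduces pitch diffusion."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    lam = lam * (1.0 - nu * dt) + sign * math.sqrt((1.0 - lam * lam) * nu * dt)
    return max(-1.0, min(1.0, lam))  # clamp against rare float overshoot

rng = random.Random(42)
nu, dt, steps = 1.0, 1.5e-3, 2000     # total time = 3 collision times
lams = [0.9] * 2000                   # beam-like initial pitch
for _ in range(steps):
    lams = [scatter(l, nu, dt, rng) for l in lams]
mean_lam = sum(lams) / len(lams)      # decays as exp(-nu*t) toward isotropy
```

Accumulating per-particle changes in flux coordinate and angle over such steps is what yields the smooth Fokker-Planck coefficients the abstract refers to.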

  2. Photonic Crystal Optical Tweezers

    E-print Network

    Wilson, Benjamin K; Bachar, Stephanie; Knouf, Emily; Bendoraite, Ausra; Tewari, Muneesh; Pun, Suzie H; Lin, Lih Y

    2009-01-01

    Non-invasive optical manipulation of particles has emerged as a powerful and versatile tool for biological study and nanotechnology. In particular, trapping and rotation of cells, cell nuclei and sub-micron particles enables unique functionality for various applications such as tissue engineering, cancer research and nanofabrication. We propose and demonstrate a purely optical approach to rotate and align particles using the interaction of polarized light with photonic crystal nanostructures to generate enhanced trapping force. With a weakly focused laser beam we observed efficient trapping and transportation of polystyrene beads with sizes ranging from 10 μm down to 190 nm, as well as cancer cell nuclei. In addition, we demonstrated alignment of non-spherical particles using a 1-D photonic crystal structure. Bacterial cells were trapped, rotated and aligned with optical intensity as low as 17 μW/μm². Finite-difference time domain (FDTD) simulations of the optical near-field and far-field above the photonic c...

  3. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy, fully coupled Monte Carlo N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  4. Contrast improvement by selecting ballistic-photons using polarization gating.

    PubMed

    Sormaz, Miloš; Jenny, Patrick

    2010-11-01

    In this paper a new approach to improve contrast in optical subsurface imaging is presented. The method is based on time-resolved reflectance and selection of ballistic photons using polarization gating. Numerical studies with a statistical Monte Carlo method also reveal that weakly scattered diffuse photons can be eliminated by employing a small aperture and that the contrast improvement strongly depends on the single-scattering phase function. A possible experimental setup is discussed in the conclusions. PMID:21164718
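The selection principle can be illustrated with a deliberately crude one-dimensional Monte Carlo: a photon either crosses the slab unscattered (ballistic, polarization preserved) or scatters n ≥ 1 times, each event depolarizing it with some probability, so gating on the co-polarized channel enriches the ballistic fraction. The scattering coefficient, slab thickness, and depolarization probability below are invented illustration values, not the paper's.

```python
import random

def detected_copolarized(mu_s_per_mm, slab_mm, depol_per_scatter, rng):
    """Return (is_ballistic, detected_in_copolarized_channel) for one
    photon crossing a slab; scattering events are Poisson along the path."""
    n_scat, remaining = 0, slab_mm
    while True:
        step = rng.expovariate(mu_s_per_mm)   # free path to next scatter
        if step >= remaining:
            break
        remaining -= step
        n_scat += 1
    copol = rng.random() < (1.0 - depol_per_scatter) ** n_scat
    return n_scat == 0, copol

rng = random.Random(7)
N = 20000
ballistic = copol = ballistic_and_copol = 0
for _ in range(N):
    b, c = detected_copolarized(0.5, 4.0, 0.6, rng)
    ballistic += b
    copol += c
    ballistic_and_copol += b and c

frac_all = ballistic / N                   # ballistic fraction, no gating
frac_gated = ballistic_and_copol / copol   # ballistic fraction after gating
```

With an optical depth of 2, roughly e⁻² of the photons are ballistic without gating, while the co-polarized channel contains a markedly higher ballistic fraction; the real method sharpens this further with time resolution and a small aperture.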

  5. Clinical utility of dopamine transporter single photon emission CT (DaT-SPECT) with (123I) ioflupane in diagnosis of parkinsonian syndromes

    PubMed Central

    Bajaj, Nin; Hauser, Robert A; Grachev, Igor D

    2013-01-01

    The diagnosis of movement disorders including Parkinson's disease (PD) and essential tremor is determined through clinical assessment. The difficulty with diagnosis of early PD has been highlighted in several recent clinical trials. Studies have suggested relatively high clinical diagnostic error rates for PD and essential tremor. This review was undertaken to clarify the utility of DaT-SPECT imaging with (123I)ioflupane (DaTSCAN or DaTscan or (123I)FP-CIT) in assisting practitioners in their clinical decision making by visualising the dopamine transporter in parkinsonian cases. In some patients with suspected parkinsonian syndromes, SPECT imaging with (123I)ioflupane is useful to assist in the diagnosis and to help guide prognosis and treatment decisions, including avoiding medications that are unlikely to provide benefit. Clinicians ordering (123I)ioflupane SPECT should be aware of its limitations and pitfalls and should order scans when there is diagnostic uncertainty or when the scan will be helpful in clinical decision making. PMID:23486993

  6. Microalgae photonics

    NASA Astrophysics Data System (ADS)

    Floume, Timmy; Coquil, Thomas; Sylvestre, Julien

    2011-05-01

    Due to their metabolic flexibility and fast growth rate, microscopic aquatic phototrophs like algae have a potential to become industrial photochemical converters. Algae photosynthesis could enable the large scale production of clean and renewable liquid fuels and chemicals with major environmental, economic and societal benefits. Capital and operational costs are the main issues to address through optical, process and biochemical engineering improvements. In this perspective, a variety of photonic approaches have been proposed - we introduce them here and describe their potential, limitations and compatibility with separate biotechnology and engineering progresses. We show that only sunlight-based approaches are economically realistic. One of photonics' main goals in the algae field is to dilute light to overcome photosaturation effects that impact upon cultures exposed to full sunlight. Among other approaches, we introduce a widely-compatible broadband spectral adaptation technique called AlgoSun® that uses luminescence to optimize sunlight spectrum in view of the bioconverter's requirements.

  7. Patient-Specific Monte Carlo Dose Calculations for High-Dose-Rate Endorectal Brachytherapy With Shielded Intracavitary Applicator

    SciTech Connect

    Poon, Emily; Williamson, Jeffrey F.; Te Vuong; Verhaegen, Frank

    2008-11-15

Purpose: An integrated software platform was developed to perform a patient-specific dosimetric study on high-dose-rate ¹⁹²Ir endorectal brachytherapy. Monte Carlo techniques were used to examine the perturbation effects of an eight-channel intracavitary applicator with shielding and a liquid-inflatable balloon. Such effects are ignored in conventional treatment planning systems that assume water-equivalent geometries. Methods and Materials: A total of 40 Task Group 43-based rectal patient plans were calculated using the PTRAN_CT Monte Carlo photon transport code. The silicone applicator, tungsten or lead shielding, contrast solution-filled balloon, and patient anatomy were included in the simulations. The dose to water and dose to medium were scored separately. The effects of heterogeneities and uncertainties in source positioning were examined. A superposition calculation method using pregenerated Monte Carlo dose distributions about the shielded applicator in water was developed and validated for efficient treatment planning purposes. Results: On average, metal shielding decreases the mean dose to the contralateral normal tissues by 24% and reduces the target volume covered by the prescribed dose from 97% to 94%. Tissue heterogeneities contribute to dose differences of <1% relative to the prescribed dose. The differences in the dose volume indices between dose to water and dose to medium-based calculations were <1% for soft tissues, <2% for bone marrow, and >20% for cortical bone. A longitudinal shift of ±2.5 mm and a rotational shift of ±15° in applicator insertion reduced the target volume receiving the prescribed dose by ≤4%. Conclusion: The shielded applicator improved dose conformity and normal tissue sparing; however, Task Group 43-based treatment planning might compromise target coverage by not accounting for shielding.

  8. Verification of Monte Carlo Calculations by Means of Neutron and Gamma Fluence Spectra Measurements behind and inside of Iron-Water Configurations

    SciTech Connect

    Boehmer, Bertram; Konheiser, Joerg; Noack, Klaus; Rogov, Anatoli; Grantz, Martin; Mehner, Hans-Christoph; Hinke, Dietmar; Unholzer, Siegfried

    2005-05-24

Neutron and gamma spectra were measured behind and inside of modules consisting of variable iron and water slabs that were installed in radial beams of the zero-power training and research reactors AKR of the Technical University Dresden and ZLFR of the University of Applied Sciences Zittau/Goerlitz. The NE-213 scintillation spectrometer employed allowed the measurement of gamma and neutron fluence spectra in the energy ranges 0.3-10 MeV for photons and 1.0-20 MeV for neutrons. The paper describes the experiments and presents important results of the measurements, which are compared with the results of Monte Carlo transport calculations made by means of the codes MCNP and TRAMO on an absolute scale of fluences.

  9. Monte Carlo Simulations of Subsurface Analysis of Painted Layers in Micro-Scale Spatially Offset Raman Spectroscopy.

    PubMed

    Matousek, Pavel; Conti, Claudia; Colombo, Chiara; Realini, Marco

    2015-09-01

    A recently developed micrometer-scale spatially offset Raman spectroscopy (micro-SORS) method provides a new analytical capability for investigating nondestructively the chemical composition of subsurface, micrometer-scale-thick, diffusely scattering layers at depths beyond the reach of conventional confocal Raman microscopy. Here we provide, for the first time, the theoretical foundations for the micro-SORS defocusing concept based on Monte Carlo simulations. Specifically, we investigate a defocusing variant of micro-SORS that we used in our recent proof-of-concept study in conditions involving thin, diffusely scattering layers on top of an extended, diffusely scattering substrate. This configuration is pertinent, for example, for the subsurface analysis of painted layers in cultural heritage studies. The depth of the origin of Raman signal and the relative micro-SORS enhancement of the sublayer signals reached are studied as a function of layer thickness, sample photon transport length, and absorption. The model predicts that sublayer enhancement initially rapidly increases with increasing defocusing, ultimately reaching a plateau. The magnitude of the enhancement was found to be larger for thicker layers. The simulations also indicate that the penetration depths of micro-SORS can be between one and two orders of magnitude larger than those reached using conventional confocal Raman microscopy. The model provides a deeper insight into the underlying Raman photon migration mechanisms permitting the more effective optimization of experimental conditions for specific sample parameters. PMID:26253393
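The abstract's central mechanism is photon migration: Raman photons random-walk through the turbid layer with a characteristic transport length, so a longer transport length lets the signal originate deeper. A minimal, purely illustrative sketch of that idea (not the authors' Monte Carlo model; the 1-D walk, step lengths and absorption probability are all assumptions):

```python
import random

def mean_sampling_depth(n_photons, step, absorb_prob, rng):
    """Toy 1-D photon random walk: each photon takes steps of length `step`
    (a stand-in for the photon transport length) until it is absorbed or
    escapes through the surface; the deepest point visited is a proxy for
    the depth probed by the Raman signal."""
    total = 0.0
    for _ in range(n_photons):
        z, zmax = 0.0, 0.0
        while rng.random() > absorb_prob:
            z += step * rng.choice([-1.0, 1.0])
            if z < 0.0:  # photon re-emerges at the surface
                break
            zmax = max(zmax, z)
        total += zmax
    return total / n_photons

rng = random.Random(0)
shallow = mean_sampling_depth(20000, step=1.0, absorb_prob=0.05, rng=rng)
deep = mean_sampling_depth(20000, step=4.0, absorb_prob=0.05, rng=rng)
# a longer transport length shifts the mean sampling depth deeper
```

Even this crude model reproduces the qualitative dependence the simulations report: sampling depth grows with the sample's photon transport length and shrinks with absorption.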

  10. Using lattice tools and unfolding methods for hpge detector efficiency simulation with the Monte Carlo code MCNP5

    NASA Astrophysics Data System (ADS)

    Querol, A.; Gallardo, S.; Ródenas, J.; Verdú, G.

    2015-11-01

In environmental radioactivity measurements, High Purity Germanium (HPGe) detectors are commonly used due to their excellent resolution. Efficiency calibration of detectors is essential to determine the activity of radionuclides. The Monte Carlo method has proved to be a powerful tool to complement efficiency calculations. In aged detectors, efficiency is partially degraded due to the growth of the dead layer and the consequent shrinkage of the active volume. The characterization of radiation transport in the dead layer is therefore essential for a realistic HPGe simulation. In this work, the MCNP5 code is used to calculate the detector efficiency. The F4MESH tally is used to determine the photon and electron fluence in the dead layer and the active volume. The energy deposited in the Ge is analyzed using the *F8 tally, while the F8 tally is used to obtain spectra and to calculate the detector efficiency. When the photon fluence and the energy deposition in the crystal are known, unfolding methods can be used to estimate the activity of a given source. In this way, the efficiency is obtained and serves to verify the value obtained by other methods.

  11. Monte Carlo Simulation of Characteristic Secondary Fluorescence in Electron Probe Microanalysis of Homogeneous Samples Using the Splitting Technique.

    PubMed

    Petaccia, Mauricio; Segui, Silvina; Castellano, Gustavo

    2015-06-01

Electron probe microanalysis (EPMA) is based on the comparison of characteristic intensities induced by monoenergetic electrons. When the electron beam ionizes inner atomic shells and these ionizations cause the emission of characteristic X-rays, secondary fluorescence can occur, originating from ionizations induced by X-ray photons produced by the primary electron interactions. As detectors are unable to distinguish the origin of these characteristic X-rays, Monte Carlo simulation of radiation transport becomes an essential tool in the study of this fluorescence enhancement. In this work, characteristic secondary fluorescence enhancement in EPMA has been studied by using the splitting routines offered by PENELOPE 2008 as a variance reduction alternative. This approach is controlled by a single parameter, NSPLIT, which represents the desired number of X-ray photon replicas. The dependence of the uncertainties associated with secondary intensities on NSPLIT was studied as a function of the accelerating voltage and the sample composition in a simple binary alloy in which this effect becomes relevant. The efficiencies achieved for the simulated secondary intensities show a remarkable improvement with increasing NSPLIT; although in most cases an NSPLIT value of 100 is sufficient, some less likely enhancements may require stronger splitting in order to increase the efficiency associated with the simulation of secondary intensities. PMID:25980545
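The splitting technique the abstract describes has a simple core: replace each photon by NSPLIT replicas of weight 1/NSPLIT, which leaves the expected tally unchanged while shrinking its variance. A minimal sketch under assumed numbers (the detection probability 0.01 and counts here are illustrative, not from the paper):

```python
import random
import statistics

def analog_tally(n_histories, p_detect, rng):
    """Analog scoring: each history emits one secondary photon, scored 1.0
    if it contributes to the (rare) secondary intensity, else 0.0."""
    return [1.0 if rng.random() < p_detect else 0.0 for _ in range(n_histories)]

def split_tally(n_histories, p_detect, nsplit, rng):
    """Splitting: each secondary photon is replaced by `nsplit` replicas of
    weight 1/nsplit, so the expected score per history is unchanged but many
    more replicas contribute to the tally."""
    scores = []
    for _ in range(n_histories):
        hits = sum(1 for _ in range(nsplit) if rng.random() < p_detect)
        scores.append(hits / nsplit)
    return scores

rng = random.Random(42)
a = analog_tally(20000, p_detect=0.01, rng=rng)
s = split_tally(5000, p_detect=0.01, nsplit=100, rng=rng)
# both estimators are unbiased for p = 0.01, but the split estimator's
# per-history variance is roughly nsplit times smaller
```

This mirrors the paper's observation that efficiency improves with NSPLIT: the variance of the split score per history falls roughly as 1/NSPLIT, at the cost of tracking more replicas.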

  12. Monte Carlo simulation of a collimation system for low-energy beamline of ELI-NP Gamma Beam System

    NASA Astrophysics Data System (ADS)

    Cardarelli, P.; Gambaccini, M.; Marziani, M.; Bagli, E.; Petrillo, V.; Bacci, A.; Curatolo, C.; Drebot, I.; Vaccarezza, C.

    2015-07-01

The ELI Nuclear Physics (ELI-NP) Gamma Beam System (GBS) is an intense and monochromatic gamma beam source based on inverse Compton interaction, currently being built in Bucharest, Romania. The gamma beam produced, with energy ranging from 0.2 to 20 MeV, energy bandwidth 0.5% and a flux of about 10⁸ photons/s, will be devoted to investigating a broad range of applications such as nuclear physics, astrophysics, material science and life sciences. The radiation produced by an inverse Compton interaction is not intrinsically monochromatic. In fact, the energy of the photons produced is related to the emission angle; therefore, the energy bandwidth can be modified by adjusting the collimation of the gamma beam. In order to define the optimal layout and evaluate the performance of a collimation system for the ELI-NP-GBS low-energy beamline (0.2-3.5 MeV), a detailed Monte Carlo simulation activity has been carried out. The simulation, using the Geant4 and MCNPX codes, included the transport of the gamma beam from the interaction point to the experimental area, passing through vacuum pipes, vacuum chambers, the collimation system and its shielding. The effectiveness of the collimation system in obtaining the required energy distribution and avoiding contamination due to secondary radiation production was evaluated. The background radiation generated by collimation and the shielding layout were also studied.

  13. Validation of columnar CsI x-ray detector responses obtained with hybridMANTIS, a CPU-GPU Monte Carlo code for coupled x-ray, electron, and optical transport

    SciTech Connect

    Sharma, Diksha; Badano, Aldo

    2013-03-15

    Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.

  14. Measuring photon-photon interactions via photon detection

    E-print Network

    Mihai A. Macovei

    2010-06-18

The strong non-linearity plays a significant role in physics, particularly in designing novel quantum sources of light and matter, as well as in quantum chemistry or quantum biology. In simple systems, the photon-photon interaction can be determined analytically; however, it becomes challenging to obtain for more complex systems. Therefore, we show here how to measure strong non-linearities by allowing the sample to interact with a weakly pumped quantized leaking optical mode. We found that the detected mean-photon number versus pump-field frequency shows several peaks. Interestingly, the interval between neighbouring peaks equals the photon-photon interaction potential. Furthermore, the system exhibits sub-Poissonian photon statistics, entanglement and photon switching with less than one photon. Finally, we connect our study with existing related experiments.

  16. 3D deterministic radiation transport for dose computations in clinical procedures

    NASA Astrophysics Data System (ADS)

    Al-Basheer, Ahmad

The main goal of this dissertation was to establish the feasibility of basing megavoltage external photon beam absorbed dose calculations in voxelized phantoms on SN deterministic calculations and pre-calculated electron absorbed dose kernels derived from full-physics Monte Carlo. The SN-derived electron absorbed dose kernel method (EDK-SN), developed as part of this research, achieves total execution times that are on the order of several times to orders of magnitude faster than conventional full-physics Monte Carlo electron transport methods considering equivalently detailed models and data fidelity. With the rapid movement toward intensity modulated radiation therapy (IMRT), radiation beam intensities have increased dramatically over the past decade, thus heightening the need for further characterization of out-of-field organ absorbed doses, along with their associated biological risks. Assessment of these tissue absorbed doses is complicated by two fundamental limitations. First, anatomic information on the patient is generally restricted to a partial body CT image acquired for treatment planning; consequently, whole-body computational phantoms must be employed to provide the out-of-field anatomy model structure for absorbed dose evaluation. Second, existing methods based on Monte Carlo radiation transport, even with the application of significant variance reduction, are quite computationally inefficient at large distances from the primary beam, and point-kernel methods do not properly handle tissue inhomogeneities. Moreover, since absorbed doses are generally tracked in all major organs in the body, variance reduction schemes for Monte Carlo are not uniformly effective in this regard. The outcome of this dissertation is to demonstrate that absorbed dose from high-energy external beam radiation can be accurately computed for both whole-body and organ-specific absorbed doses. 
The EDK-SN method implements voxelized phantoms with discrete ordinates (SN) transport computations coupled with directional influences and (pre-computed) full-physics Monte Carlo based electron absorbed dose kernels to yield total body absorbed dose information. This research shows that deterministic techniques coupled with Monte Carlo based electron absorbed dose kernels have significant potential for organ absorbed dose evaluation in the clinical management of radiation therapy patients.
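The kernel-superposition idea underlying EDK-SN can be sketched in one dimension: center a pre-computed electron dose kernel on each voxel and scale it by the photon energy released there. This is only an illustrative reduction (EDK-SN uses directional 3-D kernels driven by the SN photon solution); the kernel and energy values below are invented:

```python
def superpose_dose_kernels(energy_released, kernel):
    """1-D sketch of dose-kernel superposition: the pre-computed electron
    dose kernel is centered on each voxel and scaled by the photon energy
    released there (which EDK-SN obtains from the SN photon solution)."""
    n, half = len(energy_released), len(kernel) // 2
    dose = [0.0] * n
    for i, e in enumerate(energy_released):
        for j, frac in enumerate(kernel):
            k = i + j - half
            if 0 <= k < n:
                dose[k] += e * frac
    return dose

kernel = [0.05, 0.2, 0.5, 0.2, 0.05]           # normalized electron dose kernel
terma = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]    # photon energy released per voxel
dose = superpose_dose_kernels(terma, kernel)
# dose spreads beyond the irradiated voxels and peaks at the field center
```

Because the expensive electron transport is baked into the kernel once, the per-patient calculation reduces to this cheap superposition, which is why the method can be orders of magnitude faster than full Monte Carlo.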

  17. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  18. TH-E-BRE-02: A Forward Scattering Approximation to Dose Calculation Using the Linear Boltzmann Transport Equation

    SciTech Connect

    Catt, B; Snyder, M

    2014-06-15

    Purpose: To investigate the use of the linear Boltzmann transport equation as a dose calculation tool which can account for interface effects, while still having faster computation times than Monte Carlo methods. In particular, we introduce a forward scattering approximation, in hopes of improving calculation time without a significant hindrance to accuracy. Methods: Two coupled Boltzmann transport equations were constructed, one representing the fluence of photons within the medium, and the other, the fluence of electrons. We neglect the scattering term within the electron transport equation, resulting in an extreme forward scattering approximation to reduce computational complexity. These equations were then solved using a numerical technique for solving partial differential equations, known as a finite difference scheme, where the fluence at each discrete point in space is calculated based on the fluence at the previous point in the particle's path. Using this scheme, it is possible to develop a solution to the Boltzmann transport equations by beginning with boundary conditions and iterating across the entire medium. The fluence of electrons can then be used to find the dose at any point within the medium. Results: Comparisons with Monte Carlo simulations indicate that even simplistic techniques for solving the linear Boltzmann transport equation yield expected interface effects, which many popular dose calculation algorithms are not capable of predicting. Implementation of a forward scattering approximation does not appear to drastically reduce the accuracy of this algorithm. Conclusion: Optimized implementations of this algorithm have been shown to be very accurate when compared with Monte Carlo simulations, even in build up regions where many models fail. 
Use of a forward scattering approximation could potentially give a reasonably accurate dose distribution in a shorter amount of time for situations where a completely accurate dose distribution is not required, such as in certain optimization algorithms.
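The finite-difference scheme described above, where the fluence at each point is computed from the previous point along the particle's path, is easiest to see in the collisionless limit. A deliberately simplified sketch (single energy, no electron source term, none of the coupling in the actual paper):

```python
import math

def forward_sweep(mu, x_max, n_steps, phi0=1.0):
    """Finite-difference sweep for the collisionless 1-D transport equation
    d(phi)/dx = -mu * phi. With scattering into the beam neglected, the
    fluence at each grid point depends only on the previous point along the
    particle path, so a single pass from the boundary solves the domain."""
    dx = x_max / n_steps
    phi = phi0
    for _ in range(n_steps):
        phi -= mu * phi * dx  # phi[i+1] = phi[i] * (1 - mu * dx)
    return phi

numeric = forward_sweep(mu=0.5, x_max=2.0, n_steps=10000)
exact = math.exp(-0.5 * 2.0)  # analytic attenuation for comparison
```

The single-pass structure is what makes the forward-scattering approximation fast: there is no iteration over scattering angles, only one sweep per beam direction.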

  19. Photon calorimeter

    DOEpatents

    Chow, Tze-Show

    1988-04-22

A photon calorimeter is provided that comprises a laminar substrate that is uniform in density and homogeneous in atomic composition. A plasma-sprayed coating, generally uniform in density and homogeneous in atomic composition within the proximity of planes parallel to the surfaces of the substrate, is applied to either one or both sides of the laminar substrate. The plasma-sprayed coatings may be very efficiently spectrally tailored in atomic number. Thermocouple measuring junctions are positioned within the plasma-sprayed coatings. The calorimeter is rugged, inexpensive, and equilibrates in temperature very rapidly. 4 figs.

  20. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
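The mode-search step in the abstract subtracts a Gaussian-mixture fit of the known modes from the target density and looks for where the residual peaks. A one-dimensional toy version (the bimodal target, mode locations and mixture weights are invented for illustration; the paper works with an energy function in high dimension):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target_density(x):
    # hypothetical bimodal target with isolated modes near -3 and +3
    return 0.5 * normal_pdf(x, -3.0, 0.5) + 0.5 * normal_pdf(x, 3.0, 0.5)

def residual_density(x, known_modes, weight=0.5, sigma=0.5):
    """Target minus an approximate Gaussian mixture built on the modes
    discovered so far: near-zero around known modes, peaked near modes the
    sampler has not yet found."""
    fit = sum(weight * normal_pdf(x, m, sigma) for m in known_modes)
    return max(target_density(x) - fit, 0.0)

known_modes = [-3.0]  # suppose the sampler has only found the left mode
grid = [-6.0 + 0.01 * i for i in range(1201)]
new_mode = max(grid, key=lambda x: residual_density(x, known_modes))
# the residual's maximum sits near +3, the undiscovered mode
```

Once a new mode is located this way, the method above can add it to the wormhole network without disturbing the chain's stationary distribution.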

  1. Monte Carlo dosimetric study of the medium dose rate CSM40 source.

    PubMed

    Vijande, J; Granero, D; Perez-Calatayud, J; Ballester, F

    2013-12-01

    The (137)Cs medium dose rate (MDR) CSM40 source model (Eckert & Ziegler BEBIG, Germany) is in clinical use but no dosimetric dataset has been published. This study aims to obtain dosimetric data for the CSM40 source for its use in clinical practice as required by the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO). Penelope2008 and Geant4 Monte Carlo codes were used to characterize this source dosimetrically. It was located in an unbounded water phantom with composition and mass density as recommended by AAPM and ESTRO. Due to the low photon energies of (137)Cs, absorbed dose was approximated by collisional kerma. Additional simulations were performed to obtain the air-kerma strength, sK. Mass-energy absorption coefficients in water and air were consistently derived and used to calculate collisional kerma. Results performed with both radiation transport codes showed agreement typically within 0.05%. Dose rate constant, radial dose function and anisotropy function are provided for the CSM40 and compared with published data for other commercially available (137)Cs sources. An uncertainty analysis has been performed. The data provided by this study can be used as input data and verification in the treatment planning systems. PMID:24121444

  2. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  3. Toward Real-Time Monte Carlo Simulation Using a Commercial Cloud Computing Infrastructure+

    PubMed Central

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-01-01

Purpose Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. Methods We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the Message Passing Interface (MPI), and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. Results The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 hours on a local computer can be executed in 3.3 minutes on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Conclusion Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. PMID:21841211

  4. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. PMID:21841211
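The master/worker pattern these two records describe parallelizes trivially because MC histories are independent: the master partitions the histories, each worker runs its share with its own RNG stream, and only tallies are aggregated. A minimal serial sketch of that bookkeeping (no actual MPI or cloud provisioning; the per-history "score" is a placeholder):

```python
import random

def worker(seed, n_histories):
    """One worker node's share of the job: run its histories with its own
    reproducible RNG stream and return only the tally sum and count."""
    rng = random.Random(seed)
    tally = sum(rng.random() for _ in range(n_histories))  # toy per-history score
    return tally, n_histories

def master(n_total, n_workers):
    """Master node: partition the histories, dispatch to workers (run
    serially here; MPI or a process pool would run them concurrently),
    then aggregate the returned tallies."""
    per_worker = n_total // n_workers
    results = [worker(seed, per_worker) for seed in range(n_workers)]
    total = sum(t for t, _ in results)
    count = sum(n for _, n in results)
    return total / count  # mean score per history

mean_estimate = master(100000, n_workers=100)
# because each worker's stream is independent and seeded deterministically,
# the aggregate is identical regardless of how chunks are scheduled
```

Seeding each chunk independently is also what makes the cloud output bit-identical to a single-threaded run over the same chunked streams, as the papers report, and explains why run time scales inversely with the number of nodes.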

  5. Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data

    SciTech Connect

    Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André

    2014-04-15

Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. ["Diagnostic x-ray shielding design based on an empirical model of photon attenuation," Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. ["Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities," Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of the material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. Good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. 
Conclusions: The data of this work allow for the accurate calculation of structural shielding thickness, taking into account the spectral variation with shield thickness, and broad beam conditions, in a realistic geometry. The simplicity of calculations also obviates the need for the use of crude transmission data estimates such as the half and tenth value layer indices. Although this study was primarily designed for brachytherapy, results might also be useful for radiology and nuclear medicine facility design, provided broad beam conditions apply.
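The Archer model cited above describes each monoenergetic transmission curve with three fitted parameters, and a polyenergetic curve follows by weighting the monoenergetic curves with the source spectrum. A sketch of that superposition; the functional form is Archer's, but the (alpha, beta, gamma) values and spectral weights below are purely hypothetical, not the paper's fitted data:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter broad-beam transmission model:
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def spectrum_transmission(x, components):
    """Superpose monoenergetic transmission curves, weighted by the relative
    spectral intensity of each line."""
    total_weight = sum(w for w, _ in components)
    return sum(w * archer_transmission(x, *params)
               for w, params in components) / total_weight

# hypothetical (alpha, beta, gamma) fits for two spectral lines of a source
components = [(0.7, (0.25, 0.15, 0.8)),
              (0.3, (0.10, 0.05, 1.2))]
t0 = spectrum_transmission(0.0, components)    # no shield: transmission is 1
t5 = spectrum_transmission(5.0, components)
t10 = spectrum_transmission(10.0, components)
```

Inverting `spectrum_transmission` numerically (e.g. by bisection on x) then gives the shield thickness for a prescribed transmission, which is what the paper's GUI program computes from its tabulated curves.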

  6. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  7. Photon SAF calculation based on the Chinese mathematical phantom and comparison with the ORNL phantoms.

    PubMed

    Qiu, Rui; Li, Junli; Zhang, Zhan; Wu, Zhen; Zeng, Zhi; Fan, Jiajin

    2008-12-01

The Chinese mathematical phantom (CMP) is a stylized human body model developed based on the methods of the Oak Ridge National Laboratory (ORNL) mathematical phantom series (OMPS) and data from the Reference Asian Man and the Chinese Reference Man. It is constructed for radiation dose estimation for Mongolians, whose anatomical parameters differ from those of Caucasians to some extent. Specific absorbed fractions (SAF) are useful quantities for the primary estimation of internal radiation dose. In this paper, a general Monte Carlo code, Monte Carlo N-Particle Code (MCNP), is used to transport particles and calculate SAF. A new variance reduction technique, called the "pointing probability with force collision" method, is implemented into MCNP to reduce the calculation uncertainty, especially for a small-volume target organ. Finally, SAF data for all 31 organs of both sexes of CMP are calculated. A comparison between SAF based on the male phantoms of CMP and OMPS demonstrates that differences clearly exist, with more than 80% of the SAF data based on CMP larger than those of OMPS. However, the differences are acceptable considering the differences in physique (they exceed one order of magnitude in fewer than 3% of cases). Furthermore, trends in the SAF with increasing photon energy based on the two phantoms agree well. This model complements existing phantoms of different age, sex and ethnicity. PMID:19001898

  8. Markov Chain Monte Carlo Usher's Algorithm

    E-print Network

    Bremen, Universität

    Lecture slides: "Markov Chain Monte Carlo for Parameter Optimization", Holger Schultheis, 04.11.2014, 27 slides. Topics: 1. Concepts; 2. Markov Chain Monte Carlo (basics, example, Metropolis and simulated annealing); 3. Usher's algorithm.
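    The Metropolis step at the core of the MCMC methods covered in these slides can be sketched in a few lines. The target density and step size below are illustrative, not taken from the lecture:

```python
import math
import random

def metropolis(log_density, x0, n_steps, step=0.5, seed=1):
    """Minimal 1-D random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_density(proposal) - log_density(x)
        if log_alpha >= 0.0 or rng.random() < math.exp(log_alpha):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 up to an additive constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

The sample mean and variance should approach 0 and 1, the moments of the target density.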

  9. Tevatron direct photon results.

    SciTech Connect

    Kuhlmann, S.

    1999-09-21

    Tevatron direct photon results since DIS98 are reviewed. Two new CDF measurements are discussed: the Run Ib inclusive photon cross section and the photon + muon cross section. Comparisons with the latest NLO QCD calculations are presented.

  10. Considerations of beta and electron transport in internal dose calculations. Progress report

    SciTech Connect

    Bolch, W.E.

    1994-11-01

    The goal of this particular task is to consider, for the first time, the explicit transport of beta particles and photon-generated electrons in the series of six phantoms developed by Cristy and Eckerman (1987) at the Oak Ridge National Laboratory. In their report, ORNL/TM-8381, specific absorbed fractions of energy are reported for phantoms representing the newborn (3.4 kg), the one-year-old (9.8 kg), the five-year-old (19 kg), the ten-year-old (32 kg), the fifteen-year-old/adult female (55-58 kg), and the adult male (70 kg). Radiation transport calculations were performed with the Monte Carlo code ALGAMP which allows photon transport only. In subsequent calculations of radionuclide S values as is done in the MIRDOSE2 computer program, electron absorbed fractions are thus considered to be either unity or zero depending upon whether the source region does or does not equal the target region, respectively.
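    The unity-or-zero convention described above is trivially expressed in code; the organ names below are placeholders for illustration:

```python
def electron_absorbed_fraction(source_region, target_region):
    """Convention described above (pre-explicit-electron-transport): beta and
    electron energy is assumed fully deposited in the source region, so the
    absorbed fraction is unity when target == source and zero otherwise."""
    return 1.0 if source_region == target_region else 0.0

af_same = electron_absorbed_fraction("liver", "liver")
af_diff = electron_absorbed_fraction("liver", "kidney")
```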

  11. On the relationship between carrier mobility and velocity in sub-50 nm MOSFETs via calibrated Monte Carlo simulation

    E-print Network

    Nayfeh, Osama Munir, 1980-

    2004-01-01

    Subsequent to accurate 2D inverse modeling in the regime sensitive to electrostatics of industrial sub-50 nm NMOSFETs, a 2D full-band Monte Carlo device simulator was calibrated in the regime sensitive to transport parameters. ...

  12. The photon gas formulation of thermal radiation

    NASA Technical Reports Server (NTRS)

    Ried, R. C., Jr.

    1975-01-01

    A statistical consideration of the energy, the linear momentum, and the angular momentum of the photons that make up a thermal radiation field was presented. A general nonequilibrium statistical thermodynamics approach toward a macroscopic description of thermal radiation transport was developed and then applied to the restricted equilibrium statistical thermostatics derivation of the energy, linear momentum, and intrinsic angular momentum equations for an isotropic photon gas. A brief treatment of a nonisotropic photon gas, as an example of the results produced by the nonequilibrium statistical thermodynamics approach, was given. The relativistic variation of temperature and the invariance of entropy were illustrated.
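    For reference, the standard equilibrium relations for an isotropic photon gas that such a derivation recovers can be evaluated directly: energy density u = aT^4, pressure p = u/3, and entropy density s = (4/3)aT^3, with radiation constant a = 4σ/c. These are textbook results, not numbers from the report:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299792458.0          # speed of light, m/s
A_RAD = 4.0 * SIGMA / C  # radiation constant a, J m^-3 K^-4

def photon_gas(temperature_k):
    """Equilibrium isotropic photon-gas state variables at temperature T."""
    u = A_RAD * temperature_k ** 4                # energy density, J/m^3
    p = u / 3.0                                   # radiation pressure, Pa
    s = 4.0 * A_RAD * temperature_k ** 3 / 3.0    # entropy density, J/(m^3 K)
    return u, p, s

u, p, s = photon_gas(300.0)
```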

  13. Carlos Chagas: biographical sketch.

    PubMed

    Moncayo, Alvaro

    2010-01-01

    Carlos Chagas was born on 9 July 1878 on the farm "Bom Retiro", close to the city of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late 19th century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstration of the causal link between microbes and disease. During the same years, the pathological concept of disease, linking organic lesions with symptoms, emerged in Germany. All these innovations were adopted in the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as a research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, Chagas was asked to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research would be the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a research assistant at the Institute of Manguinhos studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world recognition and a deserved high place in medical history. After the publication of his classic article, the world paid homage to Chagas, who was elected a member of the National Academy of Medicine of Brazil on 26 October 1910 and, at the age of 31, of other national academies of the continent. The Committee of Hygiene of the League of Nations, precursor of the World Health Organization, was created in 1929; Chagas was a member of this Committee from its inception until 1933. The example of Chagas' life can be summarized in his insistence that medical research be translated into concrete benefits for human beings, because he was convinced that disease has not only biological but also social determinants. Carlos Chagas was a laboratory researcher, a clinician and a health administrator. For all these accomplishments he deserves our respect and admiration. PMID:19895782

  14. QED multiphoton corrections to Bhabha scattering at low angles. Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Richter-Wąs, E.; Ward, B. F. L.; Wąs, Z.

    1991-10-01

    We calculate the QED-corrected integrated cross-section for small-angle Bhabha scattering with kinematical cuts very close to the real experimental cuts at the LEP/SLC experiments; no distinction between electrons and photons is made. We compare up to four independent Monte Carlo calculations and one semi-analytical calculation. The aim of the exercise is to establish the precision of the O(α) Monte Carlo calculation with exclusive exponentiation of the Yennie-Frautschi-Suura type. It provides the integrated cross-section for the low-angle Bhabha scattering process (θ < 10°), necessary for the luminosity measurement, with an overall precision of 0.25%. The corresponding computer program BHLUMI 2.00 is in the form of a stand-alone Monte Carlo event generator. The complete and explicit definition of the multiphoton matrix element is given. Examples of numerical results from Monte Carlo phase-space integration are presented and discussed.
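    The quoted precision of such a calculation rests on Monte Carlo phase-space integration with a statistical error estimate. A generic sketch of that procedure, using a toy 1/θ³ integrand (typical of a small-angle falloff) rather than the actual Bhabha matrix element:

```python
import math
import random

def mc_integrate(f, a, b, n, seed=7):
    """Plain Monte Carlo estimate of the integral of f over [a, b], returning
    (estimate, one-sigma statistical uncertainty)."""
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    width = b - a
    return width * mean, width * math.sqrt(var / n)

# Toy integrand t**-3 over [0.02, 0.10]; the exact integral is 1200.
estimate, sigma = mc_integrate(lambda t: t ** -3, 0.02, 0.10, 100_000)
```

With 10^5 points the estimate lands within a fraction of a percent of the exact value, and the returned sigma quantifies that statistical precision.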

  15. Development of Numerical Models for Performance Predictions of Single-Photon Avalanche Photodetectors (SPAP) for the 2-Micron Regime

    NASA Technical Reports Server (NTRS)

    Joshi, Ravindra P.; Abedin, M. Nurul (Technical Monitor)

    2001-01-01

    Field-dependent drift velocity results are presented for electron transport in bulk indium arsenide (InAs) based on a Monte Carlo model that includes an analytical treatment of band-to-band impact ionization. Avalanche multiplication and the related excess noise factor (F) are computed as a function of device length and applied voltage. A decrease in F with increasing device length is obtained. The results suggest an inherent utility for InAs-based single-photon avalanche detectors, particularly around the 2-micron region of interest for atmospheric remote sensing applications. The dark current response was also evaluated, and the roles of its various components were analyzed. For shorter devices, the tunneling component is shown to dominate at low temperatures. Finally, possible structures for enhanced photodetection are proposed for future research.
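    The excess noise factor F mentioned above has the standard definition F = ⟨M²⟩/⟨M⟩² over per-event avalanche gains M, equal to 1 for a noiseless (deterministic) gain and larger when the gain fluctuates. A minimal sketch with illustrative gain samples, not data from the study:

```python
def excess_noise_factor(gains):
    """Excess noise factor F = <M^2> / <M>^2 from per-event avalanche
    multiplication samples."""
    n = len(gains)
    mean = sum(gains) / n
    mean_sq = sum(g * g for g in gains) / n
    return mean_sq / (mean * mean)

# Deterministic gain of 10 -> F = 1; gains fluctuating between 5 and 15 -> F > 1.
f_det = excess_noise_factor([10.0] * 100)
f_fluc = excess_noise_factor([5.0, 15.0])
```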

  16. Resonance formation in photon-photon collisions

    SciTech Connect

    Gidal, G.

    1988-08-01

    Recent experimental progress on resonance formation in photon-photon collisions is reviewed, with particular emphasis on the pseudoscalar and tensor nonets and on the γγ* production of spin-one resonances. 37 refs., 17 figs., 5 tabs.

  17. Overview of Particle and Heavy Ion Transport Code System PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS, such as its source, executable and data-library files, are assembled in one package and distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  18. Graphical User Interface for Simplified Neutron Transport Calculations

    SciTech Connect

    Schwarz, Randolph; Carter, Leland L

    2011-07-18

    A number of codes perform simple photon physics calculations. The nuclear industry is lacking in similar tools to perform simplified neutron physics shielding calculations. With the increased importance of performing neutron calculations for homeland security applications and defense nuclear nonproliferation tasks, having an efficient method for performing simple neutron transport calculations becomes increasingly important. Codes such as Monte Carlo N-particle (MCNP) can perform the transport calculations; however, the technical details in setting up, running, and interpreting the required simulations are quite complex and typically go beyond the abilities of most users who need a simple answer to a neutron transport calculation. The work documented in this report resulted in the development of the NucWiz program, which can create an MCNP input file for a set of simple geometries, source, and detector configurations. The user selects source, shield, and tally configurations from a set of pre-defined lists, and the software creates a complete MCNP input file that can be optionally run and the results viewed inside NucWiz.
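    The template-driven deck generation described above can be illustrated schematically. The function below is a hypothetical sketch in the spirit of NucWiz, not its actual code: it fills in a minimal three-block MCNP input (cells, surfaces, data) for an isotropic point source inside a spherical shield, with a surface-current tally on an outer bounding sphere:

```python
def build_mcnp_input(density_g_cm3, material_card, shield_radius_cm, energy_mev):
    """Hypothetical sketch of templated MCNP deck generation (illustrative,
    not NucWiz's implementation). Blank lines separate the cell, surface,
    and data blocks as MCNP requires."""
    lines = [
        "Simple spherical shield, generated from a template",
        "1 1 -{0:.3f} -1  imp:n=1".format(density_g_cm3),   # shield shell
        "2 0 1 -2         imp:n=1",                         # void gap
        "3 0 2            imp:n=0",                         # graveyard
        "",                                                 # end of cell block
        "1 so {0:.2f}".format(shield_radius_cm),
        "2 so {0:.2f}".format(shield_radius_cm + 10.0),
        "",                                                 # end of surface block
        "mode n",
        "m1 " + material_card,
        "sdef erg={0:.3f}".format(energy_mev),              # point source at origin
        "f1:n 2",                                           # current tally on surface 2
        "nps 100000",
    ]
    return "\n".join(lines)

deck = build_mcnp_input(7.874, "26056.70c 1", 30.0, 2.5)
```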

  19. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    SciTech Connect

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
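    A pulse-height spectrum of the kind these tools display is simply a histogram of a per-event signal, for example the number of optical photons detected per x-ray interaction. An illustrative sketch, not hybridMANTIS's actual output format:

```python
def pulse_height_spectrum(pulse_heights, n_bins, max_height):
    """Histogram per-event pulse heights into fixed-width bins; events at or
    above max_height (or below zero) are discarded."""
    counts = [0] * n_bins
    bin_width = max_height / n_bins
    for h in pulse_heights:
        if 0.0 <= h < max_height:
            counts[int(h / bin_width)] += 1
    return counts

# Three hypothetical events (120, 130 and 480 detected photons), 10 bins of width 50.
spectrum = pulse_height_spectrum([120.0, 130.0, 480.0], 10, 500.0)
```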

  20. Coupled Electron/photon S(n) Calculation in Lattice Geometry.

    NASA Astrophysics Data System (ADS)

    Hadad, Kamal

    The capabilities of the coupled charged/neutral particle transport S_N code SMARTEPANTS (Simulating Many Accumulative Rutherford Trajectories Electron Photon and Neutral Transport Solver) have been extended from x-y-z geometry to x-y-z geometry with embedded cylinders. A new method, called the super-cell algorithm, was applied to accommodate cylindrical shapes using a rectangular mesh. A super-cell is defined as a rectangular mesh cell containing one or more material interfaces; each material region within a super-cell constitutes a sub-cell. To model cylindrical shapes, curved sub-cell interfaces were used. The critical aspect of the super-cell method is determining the angular fluxes in the sub-cells within a super-cell. To do this, the super-cells were divided into two major categories, Type-1 and Type-2, with the cylinder's radius relative to the super-cell's mesh size used as the basis for distinguishing the type. Each type was divided into several sub-cases depending on the direction cosine of the angular flux entering the super-cell. The super-cell method was integrated into SMARTEPANTS and used to calculate the energy deposition for a variety of test problems to check the method's sensitivity to its parameters. For a block of gallium arsenide (GaAs) with an embedded gold cylinder, it was found that an S_8 quadrature set with five energy groups is both time-efficient and yields satisfactory results. The effect of the cylinder radius relative to the mesh size in Type-2 super-cells was found to be minimal for an optimal mesh size. Several benchmark problems were run to compare the super-cell results with the coupled electron/photon Monte Carlo code ITS. The total energy deposition in the peak energy cell (the cell with the maximum energy deposition) was selected to facilitate the comparison. For an isotropic electron source in a GaAs block embedded with Type-1 and Type-2 gold cylinders, the results were within 3% and 6%, respectively, and the SMARTEPANTS results in the non-super-cells were more symmetric than the Monte Carlo results. The super-cell method also demonstrated better computational efficiency, in both CPU time and memory, when compared with the Monte Carlo method on the same machine.
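    The abstract does not spell out the exact test for flagging a mesh cell as a super-cell. A plausible geometric criterion, offered purely as an assumption-labeled illustration, is that a cell contains a material interface when the cylinder's circular cross-section passes through it, i.e. the nearest point of the cell lies inside the circle while the farthest corner lies outside:

```python
def cell_crosses_cylinder(x0, x1, y0, y1, radius, cx=0.0, cy=0.0):
    """Plausible (assumed, not the paper's) test for a 'super-cell': does the
    circle of given radius centred at (cx, cy) cross the rectangular mesh
    cell [x0, x1] x [y0, y1]?"""
    # Nearest point of the rectangle to the circle centre, by clamping.
    nearest_x = min(max(cx, x0), x1)
    nearest_y = min(max(cy, y0), y1)
    d_min = ((nearest_x - cx) ** 2 + (nearest_y - cy) ** 2) ** 0.5
    # Farthest rectangle corner from the circle centre.
    d_max = max(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                for x in (x0, x1) for y in (y0, y1))
    return d_min < radius < d_max

# A unit cell straddling a circle of radius 1.2 centred at the origin:
is_super = cell_crosses_cylinder(1.0, 2.0, 0.0, 1.0, 1.2)
# A cell far outside the circle:
is_plain = cell_crosses_cylinder(3.0, 4.0, 3.0, 4.0, 1.2)
```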