Sample records for photon transport code

  1. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  2. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  3. ITS Version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  4. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.

  5. PBMC: Pre-conditioned Backward Monte Carlo code for radiative transport in planetary atmospheres

    NASA Astrophysics Data System (ADS)

    García Muñoz, A.; Mills, F. P.

    2017-08-01

    PBMC (Pre-Conditioned Backward Monte Carlo) solves the vector Radiative Transport Equation (vRTE) and can be applied to planetary atmospheres irradiated from above. The code builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. By accounting for polarization in the sampling of photon propagation directions, and by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions, PBMC avoids the unstable and biased solutions of classical BMC algorithms for conservative, optically thick, strongly polarizing media such as Rayleigh atmospheres.
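
    The backward (reverse) tracing idea can be illustrated with a toy sketch, reduced far below what PBMC actually does (PBMC solves the polarized vRTE; here we use a scalar, absorption-only, plane-parallel atmosphere, and the function name and parameters are invented for illustration). A trajectory starts at the detector and "reaches the source" if its sampled optical path to the first interaction exceeds the total optical depth, recovering the Beer-Lambert transmission:

```python
import math
import random

def backward_mc_direct_transmission(tau: float, n: int, seed: int = 1) -> float:
    """Backward Monte Carlo estimate of the directly transmitted intensity
    looking straight up through a purely absorbing, plane-parallel
    atmosphere of total optical depth tau.  Trajectories are traced from
    the detector toward the source at the top of the atmosphere."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n):
        # Optical depth to the next interaction: s = -ln(u), u ~ U(0, 1]
        s = -math.log(1.0 - rng.random())
        if s > tau:           # photon escapes the top, i.e. reaches the source
            escaped += 1
    return escaped / n

est = backward_mc_direct_transmission(tau=1.0, n=200_000)
# Analytic Beer-Lambert answer is exp(-1) ~ 0.368
```

    With scattering, real backward codes must also weight each collision by the local phase (or scattering) matrix, which is where PBMC's pre-conditioning enters.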

  6. CEPXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    CEPXS is a multigroup-Legendre cross-section generating code. The cross sections produced by CEPXS enable coupled electron-photon transport calculations to be performed with multigroup radiation transport codes, e.g. MITS and SCEPTRE. CEPXS generates multigroup-Legendre cross sections for photons, electrons, and positrons over the energy range from 100 MeV to 1.0 keV. The continuous slowing-down approximation is used for those electron interactions that result in small energy losses. The extended transport correction is applied to the forward-peaked elastic scattering cross section for electrons. A standard multigroup-Legendre treatment is used for the other coupled electron-photon cross sections. CEPXS extracts electron cross-section information from the DATAPAC data set and photon cross-section information from Biggs-Lighthill data. The model that is used for ionization/relaxation in CEPXS is essentially the same as that employed in ITS.
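
    In a multigroup-Legendre representation, angular scattering kernels are stored as Legendre moments. A minimal sketch of that expansion (not CEPXS's actual algorithm; the toy kernel, normalization convention, and function names are assumptions for illustration) computes moments of a normalized linearly anisotropic kernel by Gauss-Legendre quadrature:

```python
import numpy as np

def legendre_moments(pdf, lmax: int, npts: int = 64) -> np.ndarray:
    """Legendre moments f_l = integral_{-1}^{1} pdf(mu) P_l(mu) dmu,
    evaluated with Gauss-Legendre quadrature (exact here, since the
    integrand is a low-order polynomial)."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    moments = []
    for l in range(lmax + 1):
        coeffs = np.zeros(l + 1)
        coeffs[l] = 1.0                     # P_l as a one-term Legendre series
        p_l = np.polynomial.legendre.legval(nodes, coeffs)
        moments.append(np.sum(weights * pdf(nodes) * p_l))
    return np.array(moments)

g = 0.3                                     # mean scattering cosine of the toy kernel
kernel = lambda mu: 0.5 * (1.0 + 3.0 * g * mu)   # normalized on [-1, 1]
moms = legendre_moments(kernel, lmax=2)
# moms[0] ~ 1 (normalization), moms[1] ~ g, moms[2] ~ 0
```

    Production codes additionally fold in group-to-group energy transfer, which this sketch omits.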

  7. Continuous Energy Photon Transport Implementation in MCATK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed

    2016-10-31

    The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.

  8. ITS Version 6: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  9. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  10. Comparison of Calculations and Measurements of the Off-Axis Radiation Dose (SI) in Liquid Nitrogen as a Function of Radiation Length.

    DTIC Science & Technology

    1984-12-01

    The off-axis dose in silicon was calculated using the electron/photon transport code CYLTRAN and measured using thermoluminescent dosimeters (TLDs) at various path lengths out to 2 radiation lengths. Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory and measurements

  11. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. 
    Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so that the neutron-induced photon signatures are preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and an HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, the volume-averaged photon flux within the detector, and the high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude of the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered.
The agreement in the code-to-code comparison cases demonstrates a proof-of-concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over the pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25 to 500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.

  12. DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations

    NASA Astrophysics Data System (ADS)

    Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.

    2000-08-01

    A new Monte Carlo (MC) algorithm, the 'dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a 'mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
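
    The δ-scattering (Woodcock) method mentioned above avoids computing voxel-boundary intersections: free paths are sampled with a single majorant cross section, and a tentative collision is accepted as real with probability equal to the local-to-majorant ratio, otherwise it is a "virtual" collision and the photon simply continues. A self-contained 1-D sketch (the two-region slab, coefficients, and function name are invented for illustration, and real collisions are treated as pure absorption to keep the toy small):

```python
import math
import random

def woodcock_transmission(mu_of_x, mu_max: float, thickness: float,
                          n: int, seed: int = 2) -> float:
    """Estimate uncollided photon transmission through a heterogeneous 1-D
    slab with delta-scattering (Woodcock) tracking: free paths use the
    majorant mu_max; a collision is real with probability mu(x)/mu_max."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / mu_max
            if x >= thickness:
                transmitted += 1
                break
            if rng.random() < mu_of_x(x) / mu_max:
                break          # real collision; treated as absorption here
    return transmitted / n

# Two-region slab: mu = 0.5 /cm for x < 1 cm, mu = 1.5 /cm for 1 <= x < 2 cm
mu = lambda x: 0.5 if x < 1.0 else 1.5
est = woodcock_transmission(mu, mu_max=1.5, thickness=2.0, n=200_000)
# Analytic uncollided transmission: exp(-(0.5*1 + 1.5*1)) = exp(-2)
```

    The price of the method is wasted virtual collisions where the majorant far exceeds the local coefficient; the benefit, as the abstract notes, is that no per-voxel ray tracing is needed.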

  13. A new Monte Carlo code for light transport in biological tissue.

    PubMed

    Torres-García, Eugenio; Oros-Pantoja, Rigoberto; Aranda-Lara, Liliana; Vieyra-Reyes, Patricia

    2018-04-01

    The aim of this work was to develop an event-by-event Monte Carlo code for light transport (called MCLTmx) to identify and quantify ballistic, diffuse, and absorbed photons, as well as their interaction coordinates inside the biological tissue. The mean free path length was computed between two interactions for scattering or absorption processes, and scatter angles were calculated when necessary, until the photon disappeared or left the region of interest. A three-layer array (air-tissue-air) was used, forming a semi-infinite sandwich. The light source was placed at (0,0,0), emitting towards (0,0,1). The input data were: refractive indices, target thickness (0.02, 0.05, 0.1, 0.5, and 1 cm), number of particle histories, and λ, from which the code calculated the anisotropy, scattering, and absorption coefficients. Validation showed differences of less than 0.1% from values reported in the literature. The MCLTmx code discriminates between ballistic and diffuse photons and, inside the biological tissue, calculates specular reflection, diffuse reflection, ballistic transmission, diffuse transmission, and absorption, all as functions of wavelength and thickness. The MCLTmx code can be useful for light transport inside any medium by changing the parameters that describe the new medium: anisotropy, dispersion and attenuation coefficients, and refractive indices for a specific wavelength.
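
    The core sampling loop of such tissue-optics codes draws free paths from the total interaction coefficient and scattering angles from a phase function, most commonly Henyey-Greenstein. A hedged sketch of those two pieces (the coefficients are illustrative, and the abstract does not state which phase function MCLTmx uses):

```python
import math
import random

def sample_hg_cosine(g: float, u: float) -> float:
    """Sample cos(theta) from the Henyey-Greenstein phase function with
    anisotropy g, a standard choice in tissue-optics Monte Carlo."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                       # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - t * t) / (2.0 * g)

rng = random.Random(3)
mu_a, mu_s, g = 0.1, 10.0, 0.9    # absorption/scattering coefficients (1/cm), anisotropy
mu_t = mu_a + mu_s                # total interaction coefficient

n = 200_000
# Free path lengths: s = -ln(u)/mu_t, so the mean free path is 1/mu_t
mean_path = sum(-math.log(1.0 - rng.random()) / mu_t for _ in range(n)) / n
# HG is constructed so that the mean scattering cosine equals g
mean_cos = sum(sample_hg_cosine(g, rng.random()) for _ in range(n)) / n
```

    At each sampled interaction site the photon would then be classified as absorbed (with probability mu_a/mu_t) or scattered into the sampled direction, which is how ballistic and diffuse photons end up separated.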

  14. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  15. A Monte Carlo method using octree structure in photon and electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogawa, K.; Maeda, S.

    Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations in SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are low energy and therefore bremsstrahlung production by the electrons generated is negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, high-speed simulation is possible for a simple object with one medium. Here, object description is important in performing photon and/or electron transport with a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Even when the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of cells is much smaller than in a voxel-based approach, which represents an object as a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections of the object before the simulation; it then transports photons in that geometry. Using the code, the user need only prepare a set of serial sections for the object in which photon trajectories are to be simulated; the simulation then runs automatically on the simplified geometry given by the octree representation, without constructing an optimal geometry by hand.
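
    A minimal sketch of the octree idea described above, assuming the simplest possible build rule (classify each cell by sampling its centre, then merge octants that came out uniform); the node layout and function names are invented for illustration, not taken from the paper:

```python
from typing import Callable, List, Union

# A leaf is just a medium ID (int); an internal node is a list of 8 children
# ordered by octant bits (bit 0 = upper x half, bit 1 = y, bit 2 = z).
Node = Union[int, List["Node"]]

def build(medium: Callable[[float, float, float], int],
          x0: float, y0: float, z0: float, size: float, depth: int) -> Node:
    """Recursively subdivide a cube, then collapse uniform regions."""
    h = size / 2.0
    if depth == 0:
        return medium(x0 + h, y0 + h, z0 + h)   # classify by cell centre
    children = [build(medium,
                      x0 + (i & 1) * h,
                      y0 + ((i >> 1) & 1) * h,
                      z0 + ((i >> 2) & 1) * h,
                      h, depth - 1)
                for i in range(8)]
    if all(isinstance(c, int) for c in children) and len(set(children)) == 1:
        return children[0]                       # uniform region -> one leaf
    return children

def query(node: Node, x: float, y: float, z: float) -> int:
    """Descend from the root of the unit cube to the leaf containing (x,y,z)."""
    x0 = y0 = z0 = 0.0
    size = 1.0
    while not isinstance(node, int):
        h = size / 2.0
        i = int(x >= x0 + h) | (int(y >= y0 + h) << 1) | (int(z >= z0 + h) << 2)
        x0 += (i & 1) * h
        y0 += ((i >> 1) & 1) * h
        z0 += ((i >> 2) & 1) * h
        size = h
        node = node[i]
    return node

# Object: medium 1 in the left half (x < 0.5), medium 2 in the right half.
tree = build(lambda x, y, z: 1 if x < 0.5 else 2, 0.0, 0.0, 0.0, 1.0, depth=4)
```

    Because both halves are uniform, the tree collapses to a single root split instead of the 4096 cells a depth-4 voxel grid would need, which is exactly the saving the abstract describes.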

  16. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    also simulated in the models. Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle)...('input deck') for the MCNP, Monte Carlo N-Particle, radiation transport code. MCNP is a general-purpose code designed to simulate neutron, photon

  17. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project explains mainly the different methodologies carried out to speed up calculations in order to apply this code efficiently in radiotherapy treatment planning.

  18. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte Carlo algorithm to evaluate transport through the source shields and an integral line source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  19. Photon migration in non-scattering tissue and the effects on image reconstruction

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.; Arridge, S. R.

    1999-12-01

    Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.

  20. Nuclear Resonance Fluorescence for Materials Assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quiter, Brian; Ludewigt, Bernhard; Mozin, Vladimir

    This paper discusses the use of nuclear resonance fluorescence (NRF) techniques for the isotopic and quantitative assaying of radioactive material. Potential applications include age-dating of an unknown radioactive source, pre- and post-detonation nuclear forensics, and safeguards for nuclear fuel cycles. Examples of age-dating a strong radioactive source and assaying a spent fuel pin are discussed. The modeling work has been performed with the Monte Carlo radiation transport computer code MCNPX, and the capability to simulate NRF has been added to the code. Discussed are the limitations in MCNPX's photon transport physics for accurately describing photon scattering processes that are important contributions to the background and impact the applicability of the NRF assay technique.

  1. Benchmark test of transport calculations of gold and nickel activation with implications for neutron kerma at Hiroshima.

    PubMed

    Hoshi, M; Hiraoka, M; Hayakawa, N; Sawada, S; Munaka, M; Kuramoto, A; Oka, T; Iwatani, K; Shizuma, K; Hasai, H

    1992-11-01

    A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a 252Cf fission neutron source to validate the use of the code for the energy spectrum analyses of Hiroshima atomic bomb neutrons. Nuclear data libraries used in the Monte Carlo neutron and photon transport code calculation were ENDF/B-III, ENDF/B-IV, LASL-SUB, and ENDL-73. The neutron moderators used were granite (the main component of which is SiO2, with a small fraction of hydrogen), Newlight [polyethylene with 3.7% boron (natural)], ammonium chloride (NH4Cl), and water (H2O). Each moderator was 65 cm thick. The neutron detectors were gold and nickel foils, which were used to detect thermal and epithermal neutrons (4.9 eV) and fast neutrons (> 0.5 MeV), respectively. Measured activity data from neutron-irradiated gold and nickel foils in these moderators decreased to about 1/1,000th or 1/10,000th, which corresponds to about 1,500 m ground distance from the hypocenter in Hiroshima. For both gold and nickel detectors, the measured activities and the calculated values agreed within 10%. The slopes of the depth-yield relations in each moderator, except granite, were similar for neutrons detected by the gold and nickel foils. From the results of these studies, the Monte Carlo neutron and photon transport code was verified to be accurate enough for use with the elements hydrogen, carbon, nitrogen, oxygen, silicon, chlorine, and cadmium, and for incident 252Cf fission spectrum neutrons.
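
    As a quick plausibility check on the quoted numbers (an illustrative single-exponential model, not the authors' analysis): an attenuation of about 1/1,000 across a 65 cm moderator implies an effective relaxation length of roughly 9.4 cm, and the same model then predicts the factor at any intermediate depth:

```python
import math

# Measured activities fell to roughly 1/1,000 of their surface values
# across the 65 cm moderators.  Model the depth-yield curve as a single
# exponential A(d) = A0 * exp(-d / L):
thickness_cm = 65.0
attenuation = 1.0 / 1000.0
L = -thickness_cm / math.log(attenuation)   # effective relaxation length, ~9.4 cm

# The same model gives the factor at half depth (32.5 cm):
half_depth_factor = math.exp(-32.5 / L)     # = 1000**-0.5, ~0.032
```

    The real depth-yield slopes differ between moderators (the abstract notes granite behaves differently), so this single-L model is only a back-of-the-envelope consistency check.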

  2. An Electron/Photon/Relaxation Data Library for MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, III, H. Grady

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  2. Treating electron transport in MCNP™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
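
    The stepping scheme described in this abstract can be sketched in a few lines. The Gaussian energy-loss and small-angle models below are illustrative stand-ins (assumptions for the sketch), not MCNP's actual Goudsmit-Saunderson, Landau, or Blunck-Leisegang sampling:

```python
import math
import random

def condensed_history_step(energy_mev, step_cm, stopping_power_mev_per_cm,
                           sigma_eloss_mev, theta0_rad, rng=random):
    """One condensed-history step: mean energy loss plus a sampled
    fluctuation, and a sampled multiple-scattering deflection.
    Gaussian models stand in for the real straggling/deflection theories."""
    # Mean (CSDA-like) energy loss over the step from the stopping power.
    mean_loss = stopping_power_mev_per_cm * step_cm
    # Energy-loss straggling about the mean (clamped to be non-negative).
    loss = max(0.0, rng.gauss(mean_loss, sigma_eloss_mev))
    new_energy = max(0.0, energy_mev - loss)
    # Small-angle multiple-scattering deflection: polar angle and azimuth.
    theta = abs(rng.gauss(0.0, theta0_rad))
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return new_energy, theta, phi

def transport(energy_mev, cutoff_mev, step_cm,
              stopping_power_mev_per_cm, sigma_eloss_mev, theta0_rad):
    """Follow an electron step by step until it falls below the cutoff;
    returns the number of condensed-history steps taken."""
    steps = 0
    while energy_mev > cutoff_mev:
        energy_mev, _, _ = condensed_history_step(
            energy_mev, step_cm, stopping_power_mev_per_cm,
            sigma_eloss_mev, theta0_rad)
        steps += 1
    return steps
```

    With a stopping power of about 2 MeV/cm and 0.01 cm steps, slowing from 0.5 MeV to 0.0625 MeV takes on the order of twenty steps rather than the ~10^5 individual collisions, which is the entire point of the condensed-history method.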

  4. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors

    PubMed Central

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168

  5. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    PubMed

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization.

  6. Nuclear Resonance Fluorescence for Materials Assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quiter, Brian J.; Ludewigt, Bernhard; Mozin, Vladimir

    This paper discusses the use of nuclear resonance fluorescence (NRF) techniques for the isotopic and quantitative assaying of radioactive material. Potential applications include age-dating of an unknown radioactive source, pre- and post-detonation nuclear forensics, and safeguards for nuclear fuel cycles. Examples of age-dating a strong radioactive source and assaying a spent fuel pin are discussed. The modeling work has been performed with the Monte Carlo radiation transport computer code MCNPX, and the capability to simulate NRF has been added to the code. Discussed are the limitations in MCNPX's photon transport physics for accurately describing photon scattering processes that are important contributions to the background and impact the applicability of the NRF assay technique.

  7. Correlated Production and Analog Transport of Fission Neutrons and Photons using Fission Models FREYA, FIFRELIN and the Monte Carlo Code TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Verbeke, Jérôme M.; Petit, Odile; Chebboubi, Abdelhazize; Litaize, Olivier

    2018-01-01

    Fission modeling in general-purpose Monte Carlo transport codes often relies on average nuclear data provided by international evaluation libraries. As such, only average fission multiplicities are available and correlations between fission neutrons and photons are missing. Whereas uncorrelated fission physics is usually sufficient for standard reactor core and radiation shielding calculations, correlated fission secondaries are required for specialized nuclear instrumentation and detector modeling. For coincidence counting detector optimization for instance, precise simulation of fission neutrons and photons that remain correlated in time from birth to detection is essential. New developments were recently integrated into the Monte Carlo transport code TRIPOLI-4 to model fission physics more precisely, the purpose being to access event-by-event fission events from two different fission models: FREYA and FIFRELIN. TRIPOLI-4 simulations can now be performed, either by connecting via an API to the LLNL fission library including FREYA, or by reading external fission event data files produced by FIFRELIN beforehand. These new capabilities enable us to easily compare results from Monte Carlo transport calculations using the two fission models in a nuclear instrumentation application. In the first part of this paper, broad underlying principles of the two fission models are recalled. We then present experimental measurements of neutron angular correlations for 252Cf(sf) and 240Pu(sf). The correlations were measured for several neutron kinetic energy thresholds. In the latter part of the paper, simulation results are compared to experimental data. Spontaneous fissions in 252Cf and 240Pu are modeled by FREYA or FIFRELIN. Emitted neutrons and photons are subsequently transported to an array of scintillators by TRIPOLI-4 in analog mode to preserve their correlations. 
Angular correlations between fission neutrons obtained independently from these TRIPOLI-4 simulations, using either FREYA or FIFRELIN, are compared to experimental results. For 240Pu(sf), the measured correlations were used to tune the model parameters.

  8. Brachytherapy dosimetry of 125I and 103Pd sources using an updated cross section library for the MCNP Monte Carlo transport code.

    PubMed

    Bohm, Tim D; DeLuca, Paul M; DeWerd, Larry A

    2003-04-01

    Permanent implantation of low energy (20-40 keV) photon emitting radioactive seeds to treat prostate cancer is an important treatment option for patients. In order to produce accurate implant brachytherapy treatment plans, the dosimetry of a single source must be well characterized. Monte Carlo based transport calculations can be used for source characterization, but must have up to date cross section libraries to produce accurate dosimetry results. This work benchmarks the MCNP code and its photon cross section library for low energy photon brachytherapy applications. In particular, we calculate the emitted photon spectrum, air kerma, depth dose in water, and radial dose function for both 125I and 103Pd based seeds and compare to other published results. Our results show that MCNP's cross section library differs from recent data primarily in the photoelectric cross section for low energies and low atomic number materials. In water, differences as large as 10% in the photoelectric cross section and 6% in the total cross section occur at 125I and 103Pd photon energies. This leads to differences in the dose rate constant of 3% and 5%, and differences as large as 18% and 20% in the radial dose function for the 125I and 103Pd based seeds, respectively. Using a partially updated photon library, calculations of the dose rate constant and radial dose function agree with other published results. Further, the use of the updated photon library allows us to verify air kerma and depth dose in water calculations performed using MCNP's perturbation feature to simulate updated cross sections. We conclude that in order to most effectively use MCNP for low energy photon brachytherapy applications, we must update its cross section library. Following this update, the MCNP code system will be a very effective tool for low energy photon brachytherapy dosimetry applications.
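
    As context for the radial dose function reported in this abstract, the TG-43 quantity for a point source can be computed from a dose-versus-distance table as g(r) = D(r)·r² / [D(r₀)·r₀²] with r₀ = 1 cm. A minimal sketch, assuming the point-source inverse-square geometry function (actual seed dosimetry uses the line-source form):

```python
def radial_dose_function(radii_cm, dose_rate, r0_cm=1.0):
    """TG-43 radial dose function for a point source:
    g(r) = [D(r) * r^2] / [D(r0) * r0^2], so that g(r0) = 1.
    radii_cm and dose_rate are parallel lists; r0_cm must be in radii_cm."""
    d0 = dose_rate[radii_cm.index(r0_cm)]
    return [d * r * r / (d0 * r0_cm * r0_cm)
            for r, d in zip(radii_cm, dose_rate)]
```

    For a pure inverse-square field g(r) is identically 1; its departure from 1 measures attenuation and scatter buildup in water, which is exactly where the photoelectric cross-section differences discussed above show up at 125I and 103Pd energies.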

  9. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  10. ASC Weekly News Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womble, David E.

    Unified collision operator demonstrated for both radiation transport and PIC-DSMC. A side-by-side comparison between the DSMC method and the radiation transport method was conducted for photon attenuation in the atmosphere over 2 kilometers in physical distance with a reduction of photon density of six orders of magnitude. Both DSMC and traditional radiation transport agreed with theory to two digits. This indicates that PIC-DSMC operators can be unified with the radiation transport collision operators into a single code base and that physics kernels can remain unique to the actual collision pairs. This simulation example provides an initial validation of the unifiedmore » collision theory approach that will later be implemented into EMPIRE.« less
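
    The attenuation figures quoted in this note can be cross-checked against the analytic Beer-Lambert law. The sketch below uses an illustrative absorption coefficient chosen so that six decades are lost over 2 km (an assumption for the sketch, not atmospheric data) and compares the closed form with a tiny analog Monte Carlo in a purely absorbing medium:

```python
import math
import random

def analytic_transmission(mu_per_m, distance_m):
    """Beer-Lambert attenuation in a purely absorbing medium."""
    return math.exp(-mu_per_m * distance_m)

def mc_transmission(mu_per_m, distance_m, n_photons, rng=random):
    """Analog Monte Carlo: sample each photon's free path from the
    exponential distribution and count the fraction crossing distance_m."""
    survived = 0
    for _ in range(n_photons):
        # 1 - random() lies in (0, 1], avoiding log(0).
        path = -math.log(1.0 - rng.random()) / mu_per_m
        if path >= distance_m:
            survived += 1
    return survived / n_photons

# Coefficient giving a 1e-6 transmission over 2000 m: mu = ln(1e6)/2000.
mu = math.log(1e6) / 2000.0
```

    The analog estimator converges to the analytic curve, but at 10^-6 transmission an unbiased analog tally needs on the order of 10^8 histories per surviving photon, which is why agreement "to two digits" over six decades is a meaningful check.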

  11. Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.

    1993-07-01

    The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances >300 m, using the 60Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even "simple" experiments.

  12. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
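
    The linear-to-binary search replacement mentioned in this abstract is easy to illustrate for a cross-section energy-grid lookup, one of the hottest loops in a transport code. A minimal sketch (the grid and interval convention are assumptions for the example, not the actual ITS data layout):

```python
from bisect import bisect_right

def linear_search(grid, x):
    """O(n) scan for the interval index i with grid[i] <= x < grid[i+1]."""
    i = 0
    while i + 1 < len(grid) and grid[i + 1] <= x:
        i += 1
    return i

def binary_search(grid, x):
    """O(log n) lookup of the same interval via stdlib bisection."""
    return max(0, bisect_right(grid, x) - 1)
```

    Both routines return the same interval index for any query inside the grid; on a grid of thousands of energy points, evaluated once per collision, the logarithmic version is what drives the kind of overall speed-up factors reported above.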

  13. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  14. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  15. Stochastic analog neutron transport with TRIPOLI-4 and FREYA: Bayesian uncertainty quantification for neutron multiplicity counting

    DOE PAGES

    Verbeke, J. M.; Petit, O.

    2016-06-01

    From nuclear safeguards to homeland security applications, the need for the better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a full analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.

  16. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  17. Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Sheikh-Bagheri, D.

    A new variance reduction technique (VRT) is implemented in the BEAM code [1] to improve the efficiency of calculating penumbral distributions of in-air fluence profiles for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT splits photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributes the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis as well as the energy of the photon. It is shown that the VRT developed in this work can lead to a 6-9 fold improvement in the efficiency of the penumbral photon fluence calculation for a 60Co beam compared to the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
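
    The splitting recipe described in this abstract can be sketched as follows. Rotating each split copy by a random azimuth about the beam axis (valid only for an azimuthally symmetric source) is an illustrative reading of the technique, not the actual BEAM implementation:

```python
import math
import random

def rotate_about_z(x, y, u, v, dphi):
    """Rotate position (x, y) and direction (u, v) by dphi about the z axis."""
    c, s = math.cos(dphi), math.sin(dphi)
    return c * x - s * y, s * x + c * y, c * u - s * v, s * u + c * v

def split_photon(pos, direction, weight, n_split, rng=random):
    """Split one photon into n_split copies of weight weight/n_split, each
    rotated by a random azimuth about the beam (z) axis. The radius, the z
    coordinate, the z direction cosine, and the energy are all preserved;
    total statistical weight is conserved."""
    x, y, z = pos
    u, v, w_dir = direction
    copies = []
    for _ in range(n_split):
        dphi = rng.uniform(0.0, 2.0 * math.pi)
        xr, yr, ur, vr = rotate_about_z(x, y, u, v, dphi)
        copies.append(((xr, yr, z), (ur, vr, w_dir), weight / n_split))
    return copies
```

    Because the source is rotationally symmetric about the beam axis, every rotated copy is an equally likely history, so the estimator stays unbiased while the penumbra is sampled n_split times more densely per source decay.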

  18. MCNP6.1 simulations for low-energy atomic relaxation: Code-to-code comparison with GATEv7.2, PENELOPE2014, and EGSnrc

    NASA Astrophysics Data System (ADS)

    Jung, Seongmoon; Sung, Wonmo; Lee, Jaegi; Ye, Sung-Joon

    2018-01-01

    Emerging radiological applications of gold nanoparticles demand low-energy electron/photon transport calculations including details of the atomic relaxation process. Recently, MCNP® version 6.1 (MCNP6.1) has been released with extended cross-sections for low-energy electrons/photons, subshell photoelectric cross-sections, and more detailed atomic relaxation data than the previous versions. The atomic relaxation process of MCNP6.1 has not yet been fully tested with its new physics library (eprdata12), which is based on the Evaluated Atomic Data Library (EADL). In this study, MCNP6.1 was compared with GATEv7.2, PENELOPE2014, and EGSnrc, which have often been used to simulate low-energy atomic relaxation processes. The simulations were performed to acquire both photon and electron spectra produced by interactions of 15 keV electrons or photons with a 10-nm-thick gold nano-slab. The photon-induced fluorescence X-rays from MCNP6.1 agreed fairly well with those from GATEv7.2 and PENELOPE2014, while the electron-induced fluorescence X-rays of the four codes showed some discrepancies. Good agreement was observed between the photon-induced Auger electrons simulated by MCNP6.1 and GATEv7.2. The recent release of MCNP6.1 with eprdata12 can be used to simulate the photon-induced atomic relaxation.

  19. MCNP/X TRANSPORT IN THE TABULAR REGIME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HUGHES, H. GRADY

    2007-01-08

    The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

  20. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit, using the Monte Carlo transport code, MCNP (Monte Carlo N-Particle), version 5. In order to become computationally more efficient in view of taking part in the practical field of radiotherapy treatment planning, this work is focused mainly on the analysis of dose results and on the required computing time of different tallies applied in the model to speed up calculations.

  1. Transport of Neutrons and Photons Through Iron and Water Layers

    NASA Astrophysics Data System (ADS)

    Košťál, Michal; Cvachovec, František; Ošmera, Bohumil; Hansen, Wolfgang; Noack, Klaus

    2009-08-01

    The neutron and photon spectra were measured after iron and water plates placed at the horizontal channel of the Dresden University reactor AK-2. The measurements were performed with the multiparameter spectrometer [1] with a stilbene cylindrical crystal, 10 × 10 mm or 45 × 45 mm; the neutron and photon spectra were measured simultaneously. The calculations were performed with the MCNP code and the nuclear data libraries ENDF/B-VI.2, ENDF/B-VII.0, JENDL 3.3 and JEFF 3.1. The measured channel leakage spectrum was used as the input spectrum for the transport calculation. Both the primary photons from the reactor and those induced by neutron interactions were calculated. The comparison of the measurements and calculations through 10 cm of iron and 20 cm of water is presented. In addition, the attenuation of the mixed radiation field by iron layers from 5 to 30 cm is presented; the measured and calculated data are compared.

  2. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  3. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  4. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  5. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  6. Modification of codes NUALGAM and BREMRAD, Volume 1

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Huang, R.; Firstenberg, H.

    1971-01-01

    The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.

  7. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator.

    PubMed

    Puchalska, Monika; Sihver, Lembit

    2015-06-21

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high-energy photons are used, since neutrons are then produced both in the accelerator head and inside the patients. Using the MC technique, the created photons and particles can be followed, and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance during pediatric treatments for minimizing the risk to normal healthy tissue, e.g. of secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS to calculate the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. This model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose for the out-of-field region was around 11%. Taking into account a simplification in the simulated geometry, which does not include any potential scattering materials around the accelerator, this result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed when comparing with data found in the literature.

  8. PHITS simulations of absorbed dose out-of-field and neutron energy spectra for ELEKTA SL25 medical linear accelerator

    NASA Astrophysics Data System (ADS)

    Puchalska, Monika; Sihver, Lembit

    2015-06-01

    Monte Carlo (MC) based calculation methods for modeling photon and particle transport have several potential applications in radiotherapy. An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. It is also essential to minimize the dose to radiosensitive and critical organs. With the MC technique, the dose distributions from both the primary and scattered photons can be calculated. The out-of-field radiation doses are of particular concern when high-energy photons are used, since neutrons are then produced both in the accelerator head and inside the patients. Using the MC technique, the created photons and particles can be followed, and the transport and energy deposition in all the tissues of the patient can be estimated. This is of great importance during pediatric treatments for minimizing the risk to normal healthy tissue, e.g. of secondary cancer. The purpose of this work was to evaluate the efficiency of the 3D general-purpose PHITS MC code as an alternative approach for photon beam specification. In this study, we developed a model of an ELEKTA SL25 accelerator and used the transport code PHITS to calculate the total absorbed dose and the neutron energy spectra in-field and outside the treatment field. This model was validated against measurements performed with bubble detector spectrometers and a Bonner sphere for 18 MV linacs, including both photons and neutrons. The average absolute difference between the calculated and measured absorbed dose for the out-of-field region was around 11%. Taking into account a simplification in the simulated geometry, which does not include any potential scattering materials around the accelerator, this result is very satisfactory. Good agreement between the simulated and measured neutron energy spectra was observed when comparing with data found in the literature.

  9. Secondary bremsstrahlung and the energy-conservation aspects of kerma in photon-irradiated media.

    PubMed

    Kumar, Sudhir; Nahum, Alan E

    2016-02-07

    Kerma, collision kerma and absorbed dose in media irradiated by megavoltage photons are analysed with respect to energy conservation. The user-code DOSRZnrc was employed to compute absorbed dose D, kerma K and a special form of kerma, K ncpt, obtained by setting the charged-particle transport energy cut-off very high, thereby preventing the generation of 'secondary bremsstrahlung' along the charged-particle paths. The user-code FLURZnrc was employed to compute photon fluence, differential in energy, from which collision kerma, K col, and K were derived. The ratios K/D, K ncpt/D and K col/D have thereby been determined over very large volumes of water, aluminium and copper irradiated by broad, parallel beams of 0.1 to 25 MeV monoenergetic photons, and 6, 10 and 15 MV 'clinical' radiotherapy qualities. Concerning depth-dependence, the 'area under the kerma, K, curve' exceeded that under the dose curve, demonstrating that kerma does not conserve energy when computed over a large volume. This is due to the 'double counting' of the energy of the secondary bremsstrahlung photons, this energy being (implicitly) included in the kerma 'liberated' in the irradiated medium, at the same time as this secondary bremsstrahlung is included in the photon fluence which gives rise to kerma elsewhere in the medium. For 25 MeV photons this 'violation' amounts to 8.6%, 14.2% and 25.5% in large volumes of water, aluminium and copper respectively but only 0.6% for a 'clinical' 6 MV beam in water. By contrast, K col/D and K ncpt/D, also computed over very large phantoms of the same three media, for the same beam qualities, are equal to unity within (very low) statistical uncertainties, demonstrating that collision kerma and the special type of kerma, K ncpt, do conserve energy over a large volume. 
A comparison of photon fluence spectra for the 25 MeV beam at a depth of  ≈51 g cm−2 for both very high and very low charged-particle transport cut-offs reveals the considerable contribution to the total photon fluence by secondary bremsstrahlung in the latter case. Finally, a correction to the 'kerma integral' has been formulated to account for the energy transferred to charged particles by photons with initial energies below the Monte-Carlo photon transport cut-off PCUT; for 25 MeV photons this 'photon track end' correction is negligible for all PCUT below 10 keV.
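    The bookkeeping behind these results can be stated compactly. Writing g for the radiative fraction (the share of kerma re-emitted as bremsstrahlung), the standard relations between the quantities in the abstract are:

    ```latex
    K = K_{\mathrm{col}} + K_{\mathrm{rad}}, \qquad
    K_{\mathrm{rad}} = g\,K, \qquad
    K_{\mathrm{col}} = (1 - g)\,K .
    ```

    Integrated over a large volume, the K_rad component is counted once when it is 'liberated' and again when the secondary bremsstrahlung interacts elsewhere; collision kerma omits it and therefore balances, which is consistent with the K col/D ratios of unity reported above.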

  10. Advances in Monte-Carlo code TRIPOLI-4®'s treatment of the electromagnetic cascade

    NASA Astrophysics Data System (ADS)

    Mancusi, Davide; Bonin, Alice; Hugot, François-Xavier; Malouch, Fadhel

    2018-01-01

    TRIPOLI-4® is a Monte-Carlo particle-transport code developed at CEA-Saclay (France) that is employed in the domains of nuclear-reactor physics, criticality-safety, shielding/radiation protection and nuclear instrumentation. The goal of this paper is to report on current developments, validation and verification made in TRIPOLI-4 in the electron/positron/photon sector. The new capabilities and improvements concern refinements to the electron transport algorithm, the introduction of a charge-deposition score, the new thick-target bremsstrahlung option, the upgrade of the bremsstrahlung model and the improvement of electron angular straggling at low energy. The importance of each of the developments above is illustrated by comparisons with calculations performed with other codes and with experimental data.

  11. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    PubMed

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed-memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed-memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher-energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
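    The two parallelization ingredients named above, equal partitioning of photon histories and uncorrelated per-processor random streams, can be mimicked in a few lines without MPI. Here NumPy's SeedSequence.spawn stands in for SPRNG, and a toy slab-transmission tally stands in for the SPECT simulation; none of this is SIMIND's actual code:

    ```python
    import numpy as np

    def worker(seed_seq, n_hist, mu=1.0, thickness=2.0):
        """One 'processor': simulate its share of photon histories with its own
        independent random stream and return a partial tally (transmissions)."""
        rng = np.random.default_rng(seed_seq)
        depths = -np.log(1.0 - rng.random(n_hist)) / mu   # exponential free paths
        return np.count_nonzero(depths > thickness)

    n_workers, n_total = 4, 160_000
    streams = np.random.SeedSequence(2002).spawn(n_workers)  # uncorrelated streams
    partial = [worker(s, n_total // n_workers) for s in streams]
    transmission = sum(partial) / n_total                    # the 'gather' step
    print(transmission)
    ```

    Because histories are independent, the partial tallies simply add; this is the same reason the paper observes near-linear speed-up. The analytic transmission for this toy slab is exp(-2), about 0.135.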

  12. Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.

    PubMed

    Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W

    1998-05-01

    The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.

  13. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling of the particle transport, calculation of the scintillation photons induced by charged particles, simulation of the scintillation photon transport, and inclusion of the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a 241Am-9Be source was considered, and the simulated and measured neutron/gamma light output distributions were compared. There is acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
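    The zero crossing method mentioned above exploits the fact that, after bipolar (e.g. CR-RC²) shaping, a pulse with a slower scintillation decay crosses zero later. The toy numerical illustration below uses arbitrary decay constants and shaping time, not those of the actual detector or of MCNPX-ESUT-PE:

    ```python
    import numpy as np

    t = np.arange(0, 40, 0.01)                  # time axis, arbitrary units
    shaper = t * (1 - t / 2) * np.exp(-t)       # CR-RC^2 impulse response (shaping time 1)

    def zero_crossing_time(tau):
        """Shape an exponential pulse exp(-t/tau) and return its zero-crossing time."""
        pulse = np.exp(-t / tau)
        shaped = np.convolve(pulse, shaper)[: t.size]
        i = np.argmax(shaped)                   # search after the bipolar peak
        j = i + np.argmax(shaped[i:] < 0)       # first negative sample
        return t[j]

    t_gamma = zero_crossing_time(tau=2.0)       # fast (gamma-like) component
    t_neutron = zero_crossing_time(tau=5.0)     # slow (neutron-like) component
    print(t_gamma, t_neutron)                   # slower decay -> later crossing
    ```

    Discriminating on the crossing time then separates the two particle populations, which is the principle the experimental step of the paper applies.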

  14. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Morgan C.

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. 
    Second, the ability to calculate radiation dose due to the neutron environment around an MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

  15. Importance biasing scheme implemented in the PRIZMA code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of a problem.
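    Importance biasing of the kind described rests on two moves, splitting particles that enter regions of higher importance and playing Russian roulette on those entering regions of lower importance, arranged so that the expected statistical weight is preserved. The sketch below is a generic textbook version of that population control, not PRIZMA's actual scheme:

    ```python
    import random

    def adjust_population(weight, ratio, rng):
        """Split or roulette one particle crossing into a region whose importance
        changed by `ratio`; returns the surviving weights.
        The expected total weight is preserved: E[sum] = weight."""
        if ratio >= 1.0:                        # splitting
            n = int(ratio) + (1 if rng.random() < ratio - int(ratio) else 0)
            return [weight / ratio] * n
        # Russian roulette: survive with probability `ratio`, weight boosted
        return [weight / ratio] if rng.random() < ratio else []

    rng = random.Random(1)
    n = 200_000
    total = sum(sum(adjust_population(1.0, 0.4, rng)) for _ in range(n))
    print(total / n)   # ~1.0: unbiased on average
    ```

    The weight adjustment by 1/ratio in both branches is what keeps the tallies unbiased while concentrating computational effort on the important trajectories.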

  16. Nuclear radiation analysis

    NASA Technical Reports Server (NTRS)

    Knies, R. J.; Byrn, N. R.; Smith, H. T.

    1972-01-01

    A study program of radiation shielding against the deleterious effects of nuclear radiation on man and equipment is reported. The methods used to analyze the radiation environment from bremsstrahlung photons are discussed along with the methods employed by transport code users. The theory and numerical methods used to solve transport of neutrons and gammas are described, and the neutron and cosmic fluxes that would be present on the gamma-ray telescope were analyzed.

  17. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Although these codes are widely used because of the variety of resources and capabilities they offer for dose calculations, several aspects, such as the physical models, cross sections, and numerical approximations used in the simulations, remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most widely used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the two codes but also between different cross sections and algorithms within the same code. The maximum differences found between the two codes are 5.0% and 10% for photons and electrons, respectively. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters.

  18. Efficient simulation of voxelized phantom in GATE with embedded SimSET multiple photon history generator.

    PubMed

    Lin, Hsin-Hon; Chuang, Keh-Shih; Lin, Yi-Hsing; Ni, Yu-Ching; Wu, Jay; Jan, Meei-Ling

    2014-10-21

    GEANT4 Application for Tomographic Emission (GATE) is a powerful Monte Carlo simulator that combines the advantages of the general-purpose GEANT4 simulation code and the specific software tool implementations dedicated to emission tomography. However, the detailed physical modelling of GEANT4 is highly computationally demanding, especially when tracking particles through voxelized phantoms. To circumvent the relatively slow simulation of voxelized phantoms in GATE, another efficient Monte Carlo code can be used to simulate photon interactions and transport inside a voxelized phantom. The simulation system for emission tomography (SimSET), a dedicated Monte Carlo code for PET/SPECT systems, is well-known for its efficiency in simulation of voxel-based objects. An efficient Monte Carlo workflow integrating GATE and SimSET for simulating pinhole SPECT has been proposed to improve voxelized phantom simulation. Although the workflow achieves a desirable increase in speed, it sacrifices the ability to simulate decaying radioactive sources such as non-pure positron emitters or multiple emission isotopes with complex decay schemes and lacks the modelling of time-dependent processes due to the inherent limitations of the SimSET photon history generator (PHG). Moreover, a large volume of disk storage is needed to store the huge temporal photon history file produced by SimSET that must be transported to GATE. In this work, we developed a multiple photon emission history generator (MPHG) based on SimSET/PHG to support a majority of the medically important positron emitters. We incorporated the new generator codes inside GATE to improve the simulation efficiency of voxelized phantoms in GATE, while eliminating the need for the temporal photon history file. The validation of this new code based on a MicroPET R4 system was conducted for (124)I and (18)F with mouse-like and rat-like phantoms. 
    Comparison of GATE/MPHG with GATE/GEANT4 indicated there is a slight difference in energy spectra for energies below 50 keV due to the lack of x-ray simulation from (124)I decay in the new code. The spatial resolution, scatter fraction and count rate performance are in good agreement between the two codes. For the case studies of (18)F-NaF ((124)I-IAZG) using the MOBY phantom with 1 × 1 × 1 mm(3) voxel sizes, the results show that GATE/MPHG can achieve acceleration factors of approximately 3.1× (4.5×), 6.5× (10.7×) and 9.5× (31.0×) compared with GATE using the regular navigation method, the compressed voxel method and the parameterized tracking technique, respectively. In conclusion, the implementation of MPHG in GATE allows for improved efficiency of voxelized phantom simulations and is suitable for studying clinical and preclinical imaging.

  19. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  20. Monte Carlo dose calculations in homogeneous media and at interfaces: a comparison between GEPTS, EGSnrc, MCNP, and measurements.

    PubMed

    Chibani, Omar; Li, X Allen

    2002-05-01

    Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. Good agreement is observed between the calorimetric electron dose measurements and the predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to depend on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maxima in comparison with GEPTS and EGSnrc. EGS4 produces electron-dose distributions that are too penetrating in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons in both its elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed, with regard to the experimental uncertainty. 
For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), a good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.

  1. SAM-CE; A Three Dimensional Monte Carlo Code for the Solution of the Forward Neutron and Forward and Adjoint Gamma Ray Transport Equations. Revision C

    DTIC Science & Technology

    1974-07-31

    Multiple scoring regions are permitted, and these may be either finite-volume regions or point detectors or both. Other scores of interest, e.g., collision heating, count rates, etc., are calculated as functions of energy, time and position.

  2. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
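    The photon-number parity measurements these codes rely on can be demonstrated in a truncated Fock space: a code state of definite parity flips its parity eigenvalue when a single photon is lost, flagging the error. The (|0⟩+|4⟩)/√2 state below is a generic even-parity example for illustration, not one of the paper's χ(2) codewords:

    ```python
    import numpy as np

    dim = 6                                          # Fock-space truncation
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator
    parity = np.diag((-1.0) ** np.arange(dim))       # photon-number parity (-1)^n

    psi = np.zeros(dim)
    psi[0] = psi[4] = 1 / np.sqrt(2)                 # even-parity code state

    def parity_syndrome(state):
        return state @ parity @ state                # expectation value <P>

    before = parity_syndrome(psi)                    # +1: no error
    lost = a @ psi
    lost /= np.linalg.norm(lost)                     # state after one photon loss
    after = parity_syndrome(lost)                    # -1: loss detected
    print(before, after)
    ```

    Because the parity measurement reveals only the eigenvalue, not the photon number itself, the syndrome can be extracted without collapsing the encoded information, which is what makes such measurements useful for QEC.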

  3. Modelling of an Orthovoltage X-ray Therapy Unit with the EGSnrc Monte Carlo Package

    NASA Astrophysics Data System (ADS)

    Knöös, Tommy; Rosenschöld, Per Munck Af; Wieslander, Elinore

    2007-06-01

    Simulations of an orthovoltage x-ray machine have been performed with the EGSnrc code package. The BEAMnrc code was used to transport electrons, produce x-ray photons in the target, and transport these through the treatment machine down to the exit level of the applicator. Further transport in water or CT-based phantoms was facilitated by the DOSXYZnrc code. Phase space files were scored with BEAMnrc and analysed regarding the energy spectra at the end of the applicator. Tuning of simulation parameters was based on the half-value layer quantity for the beams in either Al or Cu. Calculated depth dose and profile curves have been compared against measurements and show good agreement except at shallow depths. The MC model tested in this study can be used for various dosimetric studies as well as for generating a library of typical treatment cases that can serve as both educational material and guidance in clinical practice.

  4. Modeling of photon migration in the human lung using a finite volume solver

    NASA Astrophysics Data System (ADS)

    Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.

    2006-02-01

    The application of frequency-domain and steady-state diffuse optical spectroscopy (DOS) and steady-state near-infrared spectroscopy (NIRS) to the diagnosis of human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous, complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique, in which ultrasound provides data on the localization of thorax tissue boundaries. The code's efficiency (a complex 3D case takes 1 second to compute) enables its use to quantitatively relate the detected light signal to the absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
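    In steady state, the diffusion approximation the solver implements reduces to −∇·(D∇φ) + μₐφ = S for the fluence φ. A one-dimensional finite-volume discretization of that equation fits in a few lines; the uniform grid, boundary treatment, and optical coefficients below are illustrative stand-ins for the paper's 3D solver:

    ```python
    import numpy as np

    nx, dx = 50, 0.1                 # cells and cell size (cm), illustrative
    D, mu_a = 0.03, 0.1              # diffusion and absorption coefficients, illustrative

    # Finite-volume balance per cell i:
    #   D/dx^2 * (2*phi_i - phi_{i-1} - phi_{i+1}) + mu_a*phi_i = S_i
    w = D / dx**2
    A = np.zeros((nx, nx))
    for i in range(nx):
        A[i, i] = 2 * w + mu_a       # missing neighbors treated as phi = 0 (Dirichlet)
        if i > 0:
            A[i, i - 1] = -w
        if i < nx - 1:
            A[i, i + 1] = -w

    S = np.zeros(nx)
    S[0] = 1.0                       # source confined to the first cell
    phi = np.linalg.solve(A, S)
    print(phi[:5])                   # fluence decays away from the source
    ```

    The resulting tridiagonal system is the 1D analogue of the sparse system the 3D solver assembles over its mesh; its fast solution is what makes the reported 1-second runtimes plausible.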

  5. MCNP4A: Features and philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.

    This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is "Quality, Value and New Features." Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs.

  6. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra. Particle differential spectra for protons, neutrons, and photons are presented, as is the total particle fluence as a function of depth. In addition to particle flux, dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the two MC codes agreeing more closely with each other than with the OLTARIS results. The neutron fluence from OLTARIS is lower than that from the MC codes at lower energies (E < 100 MeV). Based on mean square difference analysis, the MCNPX and PHITS results agree better with each other for fluence, dose and dose equivalent than with the OLTARIS results.

  7. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools were developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics; e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics.
The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
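    A relativistic trajectory integrator of the kind described in the second tool is commonly built around the Boris pusher. The sketch below is a generic illustration of that standard algorithm (not the presenter's code), advancing an electron's proper velocity u = γv under given E and B fields.

```python
import numpy as np

Q_E = -1.602176634e-19   # electron charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]
C = 299792458.0          # speed of light [m/s]

def boris_push(x, u, e_field, b_field, dt, q=Q_E, m=M_E):
    """One relativistic Boris step; u = gamma*v is the proper velocity.
    Half electric kick, magnetic rotation, half electric kick."""
    u_minus = u + (q * dt / (2.0 * m)) * e_field
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = (q * dt / (2.0 * m * gamma)) * b_field
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
    u_new = u_plus + (q * dt / (2.0 * m)) * e_field
    gamma_new = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + u_new * dt / gamma_new, u_new
```

    In a pure magnetic field the rotation step conserves |u| (and hence the particle's energy) exactly, which is why this scheme is the usual choice for cyclotron- and linac-style demonstrations.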

  8. Meson Production and Space Radiation

    NASA Astrophysics Data System (ADS)

    Norbury, John; Blattnig, Steve; Norman, Ryan; Aghara, Sukesh

Protecting astronauts from the harmful effects of space radiation is an important priority for long duration space flight. The National Council on Radiation Protection (NCRP) has recently recommended that pions and other mesons be included in space radiation transport codes, especially in connection with the Martian atmosphere. In an interesting accident of nature, the galactic cosmic ray spectrum has its peak intensity near the pion production threshold. The Boltzmann transport equation is structured in such a way that particle production cross sections are multiplied by particle flux. Therefore, the peak of the incident flux of the galactic cosmic ray spectrum is more important than other regions of the spectrum, and cross sections near the peak are enhanced. This happens with pion cross sections. The MCNPX Monte Carlo transport code now has the capability of transporting heavy ions, and by using a galactic cosmic ray spectrum as input, recent work has shown that pions contribute about twenty percent of the dose from galactic cosmic rays behind a shield of 20 g/cm2 aluminum and 30 g/cm2 water. It is therefore important to include pion and other hadron production in transport codes designed for space radiation studies, such as HZETRN. The status of experimental hadron production data for energies relevant to space radiation will be reviewed, as well as the predictive capabilities of current theoretical hadron production cross section and space radiation transport models. Charged pions decay into muons and neutrinos, and neutral pions decay into photons. An electromagnetic cascade is produced as these particles build up in a material. The cascade and transport of pions, muons, electrons and photons will be discussed as they relate to space radiation. The importance of other hadrons, such as kaons, eta mesons and antiprotons, will be considered as well.
Efficient methods for calculating cross sections for meson production in nucleon-nucleon and nucleus-nucleus reactions will be presented. The NCRP has also recommended that more attention be paid to neutron and light ion transport. The coupling of neutrons, light ions, mesons and other hadrons will be discussed.
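    The quoted pion production threshold follows from relativistic kinematics for a fixed-target reaction. The check below uses the standard invariant-mass formula with PDG mass values and reproduces the ~280 MeV figure for p + p → p + p + π0.

```python
M_P = 938.272    # proton rest mass [MeV/c^2]
M_PI0 = 134.977  # neutral pion rest mass [MeV/c^2]

def threshold_kinetic_energy(m_proj, m_targ, m_final_sum):
    """Minimum projectile kinetic energy (fixed target) to produce a
    final state of total rest mass m_final_sum, from requiring the
    invariant s to reach m_final_sum^2:
    T_th = (m_final_sum^2 - (m_proj + m_targ)^2) / (2 * m_targ)."""
    return (m_final_sum**2 - (m_proj + m_targ)**2) / (2.0 * m_targ)

# p + p -> p + p + pi0
t_th = threshold_kinetic_energy(M_P, M_P, 2.0 * M_P + M_PI0)  # ~280 MeV
```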

  9. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and the particles' interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons produced by charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, the discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to monoenergetic neutron/gamma sources in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4.
The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
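    The second step, converting charged-particle energy deposition into scintillation photons, is commonly modelled with Birks' law. The sketch below is such a generic light-yield model; the kB value and the track-segmentation scheme are illustrative assumptions, not parameters taken from the paper.

```python
def birks_light(de_dx_segments, dx, scint_eff=1.0, k_b=0.0126):
    """Scintillation light from a track split into segments of length dx,
    per Birks' law: dL/dx = S * (dE/dx) / (1 + kB * (dE/dx)).
    k_b (assumed, in cm/MeV) quenches high stopping-power segments."""
    return sum(scint_eff * dedx * dx / (1.0 + k_b * dedx)
               for dedx in de_dx_segments)
```

    The quenching is why equal deposited energies from a proton and an electron give different light output, which is the physical basis of neutron/gamma pulse shape discrimination.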

  10. Theory and methods for measuring the effective multiplication constant in ADS

    NASA Astrophysics Data System (ADS)

    Rugama Saez, Yolanda

    2001-10-01

In this thesis an absolute measurement technique for subcriticality determination is presented. The ADS is a hybrid system in which a subcritical assembly is fed by a proton accelerator. There are different proposals for defining an ADS: one is to use plutonium and minor actinides from power plant waste as fuel to be transmuted into non-radioactive isotopes (transmuter/burner, ATW); another is to use the Th232-U233 cycle (Energy Amplifier), thorium being an interesting and abundant fertile isotope. The development of accelerator driven systems (ADS) requires methods to monitor and control the subcriticality of this kind of system without interfering with its normal operation mode. To this end, we have applied noise analysis techniques that allow us to characterise the system while it is operating. The method presented in this thesis is based on stochastic neutron and photon transport theory, which can be implemented with presently available neutron/photon transport codes. In this work, we first analyse the stochastic transport theory, which has been applied to define a parameter for subcritical reactivity monitoring measurements. Finally we give the main limitations of, and recommendations for, this subcritical monitoring methodology. As a result of the theoretical work done in the first part of this thesis, a monitoring measurement technique has been developed and verified using two coupled Monte Carlo programs. The first, LAHET, simulates the spallation collisions and the high-energy transport; the other, MCNP-DSP, is used to estimate the counting statistics from a neutron/photon counter in a fissile system, as well as the transport of neutrons with energies below 20 MeV. From the coupling of both codes we developed the LAHET/MCNP-DSP code, which has the capability to simulate the total process in the ADS from the proton interaction to the detector signal processing.
In these simulations, we compute the cross power spectral densities between pairs of detectors located inside the system, which are defined as the measured parameter. From the comparison of the theoretical predictions with the Monte Carlo simulations, we obtain some practical and simple methods to determine the system multiplication constant. (Abstract shortened by UMI.)
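    The measured quantity, the cross power spectral density between two detector signals, can be estimated with a simple periodogram. The sketch below is a minimal single-segment estimate (a real noise analysis would average over segments, e.g. Welch's method); it is illustrative, not the thesis' estimator.

```python
import numpy as np

def cross_psd(x, y, fs):
    """Single-segment periodogram estimate of the CPSD between two
    detector signals x(t) and y(t) sampled at fs [Hz]."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    fx = np.fft.rfft(x)
    fy = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, fx * np.conj(fy) / (fs * n)
```

    For x = y this reduces to the (real, nonnegative) auto power spectral density; correlated fission chains show up as shared structure in the cross spectrum that uncorrelated detector noise does not produce.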

  11. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

Over two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation, with a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
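    The kernel such accelerators parallelize over photons is the random-walk step. Its core ingredient, sampling a free path from the Beer-Lambert distribution, can be sketched as follows (a generic MC ingredient with an illustrative interaction coefficient, not the authors' ported code).

```python
import math
import random

def sample_free_path(mu_t, rng):
    """Sample a photon free path s from p(s) = mu_t * exp(-mu_t * s)
    by inverting the CDF: s = -ln(xi) / mu_t, xi ~ U(0, 1)."""
    return -math.log(rng.random()) / mu_t

rng = random.Random(12345)
mu_t = 10.0  # total interaction coefficient [1/cm], illustrative
paths = [sample_free_path(mu_t, rng) for _ in range(200000)]
mean_path = sum(paths) / len(paths)  # approaches 1/mu_t = 0.1 cm
```

    Because each photon history is independent, this loop is embarrassingly parallel, which is what makes both GPU and Xeon Phi ports effective.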

  12. The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems

    DOE R&D Accomplishments Database

    Nelson, W. R.; Namito, Yoshihito

    1990-03-01

    In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.

  13. Optimizing modelling in iterative image reconstruction for preclinical pinhole PET

    NASA Astrophysics Data System (ADS)

    Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.

    2016-05-01

The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Inputs to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of the PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. The performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.

  14. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their low sensitivity to neutrons. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration fields, (b) a primary 60Co calibration beam, and (c) 6-MV and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS mode closely agreed with the other three codes, and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using the ITS mode agreed very well for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. For mixed field dosimetry applications such as BNCT, MCNP with ITS mode is recognized by this work as the most suitable tool.

  15. Signal pulse emulation for scintillation detectors using Geant4 Monte Carlo with light tracking simulation.

    PubMed

    Ogawara, R; Ishikawa, M

    2016-07-01

The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The percentage RMS differences between measured and simulated pulses, using suitable scintillation properties, for GSO:Ce (0.4, 1.0, and 1.5 mol%), LaBr3:Ce and BGO scintillators were 2.41%, 2.58%, 2.16%, 2.01%, and 3.32%, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.
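    The emulation idea, superposing a single-photoelectron response at each simulated photon arrival time, can be sketched as below. The two-exponential response shape and its time constants are illustrative assumptions, not the paper's measured PMT response function.

```python
import numpy as np

def emulate_anode_pulse(arrival_times_ns, t_grid_ns,
                        tau_rise=1.0, tau_fall=5.0):
    """Superpose an assumed single-photoelectron response
    h(t) = exp(-t/tau_fall) - exp(-t/tau_rise) for t > 0
    at each photon arrival time (all times in ns)."""
    pulse = np.zeros_like(t_grid_ns, dtype=float)
    for t0 in arrival_times_ns:
        dt = t_grid_ns - t0
        mask = dt > 0
        pulse[mask] += (np.exp(-dt[mask] / tau_fall)
                        - np.exp(-dt[mask] / tau_rise))
    return pulse

t = np.linspace(0.0, 50.0, 501)
pulse = emulate_anode_pulse([5.0, 6.5, 8.0], t)
```

    PSD figures of merit (e.g. tail-to-total charge ratio) can then be computed from the emulated pulse exactly as from a digitized measured one.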

  16. Overview of Particle and Heavy Ion Transport Code System PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  17. Fuego/Scefire MPMD Coupling L2 Milestone Executive Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Flint; Tencer, John; Pautz, Shawn D.

    2017-09-01

This milestone campaign focused on coupling the Sandia physics codes Fuego (the SIERRA low Mach module) and Sceptre (Scefire), the RAMSES Boltzmann transport code. Fuego enables simulation of low Mach, turbulent, reacting, particle-laden flows on unstructured meshes using CVFEM for abnormal thermal environments throughout SNL and the larger national security community. Sceptre provides simulation of photon, neutron, and charged particle transport on unstructured meshes using Discontinuous Galerkin methods for radiation effects calculations at SNL and elsewhere. Coupling these "best of breed" codes enables efficient modeling of thermal/fluid environments with radiation transport, including fires (pool, propellant, composite) as well as those with directed radiant fluxes. We seek to improve the experience of Fuego users who require radiation transport capabilities in two ways. The first is performance. We achieve this by leveraging additional computational resources for Scefire, reducing calculation times while leaving resources for fluid physics unaffected. This approach is new to Fuego, which previously utilized the same resources for both fluid and radiation solutions. The second improvement enables new radiation capabilities, including spectral (banded) radiation, beam boundary sources, and alternate radiation solvers (e.g. Pn). This summary provides an overview of these achievements.

  18. Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.

    2008-01-01

Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what the contribution of these particles to space radiation is. The pion has a production kinetic energy threshold of about 280 MeV. The galactic cosmic ray (GCR) spectrum, coincidentally, reaches its flux maximum in the hundreds-of-MeV range, near the pion production threshold. We present results from the Monte Carlo code MCNPX showing the effect of lepton and meson physics when these particles are produced and transported explicitly in a GCR environment.

  19. DXRaySMCS: a user-friendly interface developed for prediction of diagnostic radiology X-ray spectra produced by Monte Carlo (MCNP-4C) simulation.

    PubMed

    Bahreyni Toossi, M T; Moradi, H; Zare, H

    2008-01-01

In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. Each electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code to diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences were observed between the IPEM78 reported spectra and the simulated spectra generated in this study.
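    Step (iii), reducing an MCNP tally to a relative photon number per energy bin, is essentially a normalization. A minimal sketch (the tally-parsing details are omitted, and the bin values below are purely illustrative):

```python
def relative_spectrum(bin_counts):
    """Normalize tally counts so the bins sum to 1
    (relative photon number per energy bin)."""
    total = float(sum(bin_counts))
    return [c / total for c in bin_counts]

def mean_energy(bin_centers_keV, bin_counts):
    """Fluence-weighted mean photon energy of the spectrum."""
    rel = relative_spectrum(bin_counts)
    return sum(e * w for e, w in zip(bin_centers_keV, rel))
```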

  20. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes.

    PubMed

    Aghara, S K; Sriprisan, S I; Singleterry, R C; Sato, T

    2015-01-01

Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra. Particle differential spectra for protons, neutrons, and photons are presented, as is the total particle fluence as a function of depth. In addition to particle flux, dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the two MC codes agreeing more closely with each other than with the OLTARIS results. The neutron fluence from OLTARIS is lower than that from the MC codes at lower energies (E < 100 MeV). Based on mean square difference analysis, the MCNPX and PHITS results agree better with each other for fluence, dose and dose equivalent than with the OLTARIS results. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.

  1. Validation of radiative transfer computation with Monte Carlo method for ultra-relativistic background flow

    NASA Astrophysics Data System (ADS)

    Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi

    2017-11-01

We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field by using the Monte Carlo (MC) method in the context of gamma-ray burst (GRB) emission. To obtain reliable results in coupled computations of MC radiation transport with relativistic hydrodynamics that can reproduce GRB emission, we validated the radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulated angular distributions and spectra were compared among three different inertial frames and were in good agreement with each other. If the time interval for updating the flow-field was sufficiently small to resolve a photon mean free path into ten steps, the results were fully converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power law in frequency with a positive index in the range from 1 to 10 MeV. The number of photons on the high-energy side decreased with the smeared shock front because photons were scattered less immediately behind the shock wave due to the small electron number density. A large optical depth near the shock front was needed to obtain high-energy photons through bulk Compton scattering. Even the one-dimensional structure of the shock wave could affect the results of the radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, a further investigation with a smaller number of cells is required for obtaining realistic high-energy photons in multi-dimensional computations.
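    Computing in different inertial frames hinges on the relativistic Doppler shift and aberration applied to each MC photon. The transformations used in such frame-consistency checks can be sketched with the standard special-relativity formulas (an illustration, not the authors' code):

```python
import math

def doppler_shift(nu, mu, beta):
    """Photon frequency in a frame moving with speed beta*c along the
    boost axis; mu = cos(theta) between photon and boost direction:
    nu' = nu * gamma * (1 - beta * mu)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return nu * gamma * (1.0 - beta * mu)

def aberration(mu, beta):
    """Direction cosine in the moving frame:
    mu' = (mu - beta) / (1 - beta * mu)."""
    return (mu - beta) / (1.0 - beta * mu)
```

    Applying the boost and then its inverse (beta -> -beta) must return the original photon, which is the kind of invariance the paper's first validation test verifies statistically over many MC photons.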

  2. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in the time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic 'cat codes' but have several advantages, including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
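    The exact orthogonality and loss-correction properties can be checked numerically for the smallest single-mode code of this Fock-superposition type, with code words (|0> + |4>)/sqrt(2) and |2>. This specific code is shown only as an illustration of the idea; the abstract's general construction goes beyond it.

```python
import numpy as np

DIM = 8  # Fock-space cutoff (large enough for the states used here)

def fock(n):
    v = np.zeros(DIM)
    v[n] = 1.0
    return v

# Annihilation operator a|n> = sqrt(n)|n-1> on the truncated Fock space
a = np.diag(np.sqrt(np.arange(1.0, DIM)), k=1)
n_op = a.T @ a

# Smallest single-mode loss-correcting code words
w0 = (fock(0) + fock(4)) / np.sqrt(2.0)
w1 = fock(2)

e0 = a @ w0  # error state after one photon loss
e1 = a @ w1
```

    The checks below are the single-loss Knill-Laflamme conditions: the code words are exactly orthogonal, have equal mean photon number, and are mapped by photon loss onto mutually orthogonal error states, so a single loss event is detectable and correctable.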

  3. Organ doses from radionuclides on the ground. Part I. Simple time dependences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, P.; Paretzke, H.G.; Rosenbaum, H.

    1988-06-01

Organ dose equivalents of the mathematical anthropomorphic phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and are tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom, and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.
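    For radionuclides whose dose equivalent rate follows a single exponential, the dose accumulated up to time t integrates in closed form. The sketch below states that standard integral; the rate and half-life values in the test are illustrative, not values from the paper.

```python
import math

def integrated_dose(rate0, half_life, t):
    """Dose accumulated from a rate decaying as a single exponential,
    R(t) = R0 * exp(-lambda * t):
    D(t) = R0 * (1 - exp(-lambda * t)) / lambda,
    with lambda = ln(2) / half_life (any consistent time unit)."""
    lam = math.log(2.0) / half_life
    return rate0 * (1.0 - math.exp(-lam * t)) / lam
```

    The two limits are the useful sanity checks: D(t) ≈ R0 * t for short times, and D(t) → R0 / lambda for complete decay.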

  4. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. 
The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  5. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, much effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. 
The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.
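    The dose-agreement metric quoted above (average dose difference in the region receiving more than 10% of the maximum dose) can be sketched as follows. The exact normalization the authors used is not stated, so this is an illustrative definition, not the paper's implementation; the two depth-dose curves are hypothetical stand-ins for gDPM and goMC output.

```python
import numpy as np

def mean_dose_diff_pct(d_ref, d_test, threshold=0.10):
    """Average absolute dose difference, as a percentage of the maximum
    reference dose, over voxels receiving > threshold of that maximum."""
    d_ref = np.asarray(d_ref, float)
    d_test = np.asarray(d_test, float)
    d_max = d_ref.max()
    mask = d_ref > threshold * d_max            # region above 10% of max
    return 100.0 * np.mean(np.abs(d_test[mask] - d_ref[mask])) / d_max

# Toy 1D depth-dose curves from two hypothetical engines
z = np.linspace(0.0, 20.0, 201)                 # depth in cm
ref = np.exp(-0.05 * z)                         # stand-in reference dose
test = ref * 1.004                              # second engine, 0.4% high
diff_pct = mean_dose_diff_pct(ref, test)
```

For identical inputs the metric is exactly zero, which makes it easy to sanity-check before comparing real dose grids.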

  6. Skyshine line-beam response functions for 20- to 100-MeV photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
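    The fitting step described above can be illustrated with a generic three-parameter empirical form. The published integral line-beam formula is not reproduced here; the form R(phi) = kappa*exp(a*phi + b*phi^2), the synthetic "MCNP" data, and the function name are all assumptions for illustration, chosen because taking logarithms makes the fit a linear least-squares problem.

```python
import numpy as np

def fit_three_param(phi, R):
    """Fit R(phi) = kappa * exp(a*phi + b*phi**2) by linearizing:
    ln R = ln kappa + a*phi + b*phi**2 is a quadratic in phi."""
    b, a, ln_kappa = np.polyfit(phi, np.log(R), 2)
    return np.exp(ln_kappa), a, b

# Synthetic response data standing in for MCNP results
phi = np.linspace(0.1, 3.0, 30)                    # emission angle (rad)
R = 2.5 * np.exp(-1.2 * phi - 0.3 * phi ** 2)      # known parameters
kappa, a, b = fit_three_param(phi, R)
```

On noise-free data the fit recovers the generating parameters essentially exactly, which is a useful check before fitting real Monte Carlo tallies with statistical scatter.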

  7. Calculation of the effective dose from natural radioactivity in soil using MCNP code.

    PubMed

    Krstic, D; Nikezic, D

    2010-01-01

    Effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this work. Calculations were done for the most common natural radionuclides in soil: the (238)U and (232)Th series and (40)K. ORNL human phantoms and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs. The effective dose was calculated according to ICRP 74 recommendations. Conversion factors of effective dose per air kerma were determined. Results obtained here were compared with those of other authors. Copyright 2009 Elsevier Ltd. All rights reserved.
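    The effective-dose step is a tissue-weighted sum over organ equivalent doses, E = sum_T w_T * H_T. The weights below are a simplified illustrative set, not the full ICRP 74 tissue list, and the equivalent doses are hypothetical numbers.

```python
# Illustrative tissue weighting factors (must sum to 1); not the
# complete ICRP set, just enough to show the weighted sum.
W_T = {"lung": 0.12, "stomach": 0.12, "gonads": 0.20, "remainder": 0.56}

def effective_dose(equivalent_doses_mSv, weights=W_T):
    """Effective dose E = sum over tissues of w_T * H_T."""
    assert abs(sum(weights.values()) - 1.0) < 1e-12  # sanity: weights sum to 1
    return sum(weights[t] * equivalent_doses_mSv[t] for t in weights)

# Hypothetical organ equivalent doses per unit air kerma
H_T = {"lung": 0.10, "stomach": 0.08, "gonads": 0.05, "remainder": 0.07}
E = effective_dose(H_T)
```

Dividing E by the corresponding air kerma then yields the conversion factors (effective dose per air kerma) the abstract refers to.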

  8. Simulation of the spatial resolution of an X-ray imager based on zinc oxide nanowires in an anodic aluminium oxide membrane using MCNP and OPTICS codes

    NASA Astrophysics Data System (ADS)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for x-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their waveguiding behavior, can prevent this spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) nanowire scintillator, with its simple construction by electrochemical deposition in the regular hexagonal structure of an anodic aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code of the passage of particles through matter (Geant4). The results verify that this nanowire scintillator structure has a spatial resolution better than one micrometer.

  9. GUI to Facilitate Research on Biological Damage from Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Frances A.; Ponomarev, Artem Lvovich

    2010-01-01

    A graphical-user-interface (GUI) computer program has been developed to facilitate research on the damage caused by highly energetic particles and photons impinging on living organisms. The program brings together, into one computational workspace, computer codes that have been developed over the years, plus codes that will be developed during the foreseeable future, to address diverse aspects of radiation damage. These include codes that implement radiation-track models, codes for biophysical models of breakage of deoxyribonucleic acid (DNA) by radiation, pattern-recognition programs for extracting quantitative information from biological assays, and image-processing programs that aid visualization of DNA breaks. The radiation-track models are based on transport models of interactions of radiation with matter and solution of the Boltzmann transport equation by use of both theoretical and numerical models. The biophysical models of breakage of DNA by radiation include biopolymer coarse-grained and atomistic models of DNA, stochastic-process models of deposition of energy, and Markov-based probabilistic models of placement of double-strand breaks in DNA. The program is designed for use in the NT, 95, 98, 2000, ME, and XP variants of the Windows operating system.

  10. Signal pulse emulation for scintillation detectors using Geant4 Monte Carlo with light tracking simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogawara, R.; Ishikawa, M., E-mail: masayori@med.hokudai.ac.jp

    The anode pulse of a photomultiplier tube (PMT) coupled with a scintillator is used for pulse shape discrimination (PSD) analysis. We have developed a novel emulation technique for the PMT anode pulse based on optical photon transport and a PMT response function. The photon transport was calculated using the Geant4 Monte Carlo code, and the response function was obtained with a BC408 organic scintillator. The percentage RMS differences between the measured and simulated pulses, using suitable scintillation properties for the GSO:Ce (0.4, 1.0, 1.5 mol%), LaBr{sub 3}:Ce and BGO scintillators, were 2.41%, 2.58%, 2.16%, 2.01%, and 3.32%, respectively. The proposed technique demonstrates high reproducibility of the measured pulse and can be applied to simulation studies of various radiation measurements.

  11. SABRINA: an interactive three-dimensional geometry-modeling program for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T. III

    SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.

  12. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  13. MAX meets ADAM: a dosimetric comparison between a voxel-based and a mathematical model for external exposure to photons

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Vieira, J. W.; Khoury, H. J.; Lima, F. de Andrade

    2004-03-01

    The International Commission on Radiological Protection intends to revise the organ and tissue equivalent dose conversion coefficients published in various reports. For this purpose the mathematical human medical internal radiation dose (MIRD) phantoms currently in use have to be replaced by recently developed voxel-based phantoms. This study investigates the dosimetric consequences, especially with respect to the effective male dose, if not only a MIRD phantom is replaced by a voxel phantom, but also if the tissue compositions and the radiation transport codes are changed. This task is resolved by systematically replacing, in the mathematical ADAM/GSF exposure model, first the radiation transport code, then the tissue composition and finally the phantom anatomy, in order to arrive at the voxel-based MAX/EGS4 exposure model. The results show that the combined effect of these replacements can decrease the effective male dose by up to 25% for external exposures to photons with incident energies above 30 keV for different field geometries. This decrease arises mainly from increased shielding by a heterogeneous skeleton and by the overlying adipose and muscle tissue, and also from the positions internal organs occupy in a realistically designed human body compared with their positions in the mathematically constructed phantom.

  14. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils

    NASA Astrophysics Data System (ADS)

    Dixon, David A.; Hughes, H. Grady

    2017-09-01

    This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured downrange of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the maximum of the distribution is reduced by 1/e) is then determined from the angular distribution and compared with experiment. Multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement, and the calculated distributions were generally broader than the measured ones. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a well-understood deficiency in the computation of the underlying angular distributions. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied.
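    The characteristic-angle extraction described above (find the angle where the distribution falls to 1/e of its peak) can be sketched as follows, using a synthetic Gaussian distribution in place of measured fluence; the function name and interpolation scheme are our own choices.

```python
import numpy as np

def characteristic_angle(theta_deg, fluence):
    """Angle (deg) at which the angular distribution falls to 1/e of
    its maximum, found by linear interpolation outward from the peak."""
    f = np.asarray(fluence, dtype=float)
    target = f.max() / np.e
    i_max = int(np.argmax(f))
    for i in range(i_max, len(f) - 1):
        if f[i] >= target >= f[i + 1]:          # bracketing samples found
            frac = (f[i] - target) / (f[i] - f[i + 1])
            return theta_deg[i] + frac * (theta_deg[i + 1] - theta_deg[i])
    raise ValueError("distribution never falls below 1/e of its peak")

# For a Gaussian exp(-theta^2 / (2 sigma^2)) the 1/e angle is sigma*sqrt(2)
theta = np.linspace(0.0, 20.0, 2001)
sigma = 3.0
flu = np.exp(-theta ** 2 / (2.0 * sigma ** 2))
angle = characteristic_angle(theta, flu)        # ~ sigma * sqrt(2)
```

The Gaussian case gives an analytic answer to check the extraction against before applying it to tallied or measured distributions.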

  15. Effective dose rate coefficients for exposure to contaminated soil

    DOE PAGES

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.; ...

    2017-05-10

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with emissions of the 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. The coefficients calculated in this work compare favorably to those reported in US Federal Guidance Report 12 as well as by other authors who employed voxel phantoms for similar exposure scenarios.
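    The folding of monoenergetic coefficients with a nuclide's discrete emissions can be sketched as below. The log-log interpolation scheme and the synthetic power-law "coefficient" grid are assumptions for illustration; only the 662 keV line of 137Cs (intensity 0.851 per decay) is taken as an example emission.

```python
import numpy as np

def fold_coefficients(emissions, energy_grid_MeV, coeff_grid):
    """Fold discrete emissions [(energy MeV, yield per decay), ...] with
    monoenergetic coefficients using log-log interpolation on the grid."""
    logE = np.log(energy_grid_MeV)
    logC = np.log(coeff_grid)
    total = 0.0
    for energy, yield_per_decay in emissions:
        total += yield_per_decay * np.exp(np.interp(np.log(energy), logE, logC))
    return total

# Synthetic power-law response c(E) = E (arbitrary units), for which
# log-log interpolation is exact, with the Cs-137 gamma line as input
grid = np.array([0.01, 0.1, 1.0, 10.0])
cs137 = [(0.662, 0.851)]
rate = fold_coefficients(cs137, grid, grid)
```

With real data the grid would hold the calculated per-energy dose rate coefficients and the emission list would come from the ICRP Publication 107 decay data.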

  16. Effective dose rate coefficients for exposure to contaminated soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with emissions of the 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. The coefficients calculated in this work compare favorably to those reported in US Federal Guidance Report 12 as well as by other authors who employed voxel phantoms for similar exposure scenarios.

  17. National photonics skills standards for technicians

    NASA Astrophysics Data System (ADS)

    Hull, Darrell M.

    1995-10-01

    Photonics is defined as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's communication systems, from the laser that generates the digital information transported along a fiber-optic cable to the detector that decodes the information. Whether the transmitted information is a phone call from across the street or across the globe, photonics brings it to you. Where your health is concerned, photonics allows physicians to do minimally invasive surgery using fiber-optic endoscopes and lasers. Researchers using spectroscopy and microscopy are pushing the frontiers of biotechnology in activities as widespread as diagnosing disease and probing the mysteries of the genetic code. Advanced sensing and imaging techniques monitor the environment, gathering data on crops and forests, analyzing the ocean's currents and contents, and probing the atmosphere for pollutants. Transportation needs are being impacted by photonic sensors and laser rangefinders that will soon monitor and control the traffic on our nation's highways. In our factories, photonics provides machine vision systems that give a level of quality control human inspectors could never achieve. In manufacturing, lasers are replacing a variety of cutting, welding, and marking techniques, while imaging systems teamed with neural networks are producing intelligent robots. In short, photonics is paving our way into the new millennium. The skill standard is intended to define the knowledge and capabilities - the skills - that workers in the photonics industry need. Photonics will be one of the primary battlefields of the world economic conflict, and it is imperative that U.S. photonics technicians be skilled enough to allow the United States to remain competitive in a global marketplace. 
The focus of this standard is on the skills necessary for employment as a photonics technician and is not intended to be an analysis of those skills that are important for workers in all occupational areas. A comprehensive treatment of the skills necessary for all workers has been the subject of a number of studies, most notably the work of the Secretary's Commission on Achieving Necessary Skills (SCANS). It is our hope at CORD that the work presented in the standard lends more detail and rationale for the accomplishment of the broader skills that should be obtained by all students.

  18. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

    Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected cotton-garment clothing geometries is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm{sup -2} and 1,000 mg cm{sup -2}, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high-energy beta sources while it overestimates skin dose for low-energy beta sources. 13 refs., 4 figs., 2 tabs.

  19. Radiation shielding quality assurance

    NASA Astrophysics Data System (ADS)

    Um, Dallsun

    For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large set of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model of the assembly. Experimental and calculated results were compared in two ways: as the total integrated neutron flux over the energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. Both agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and further that it could serve as a powerful tool for radiation shielding analysis. As an application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of the criticality constant k were evaluated as in an ANOVA on statistical data.

  20. Modeling the Production of Beta-Delayed Gamma Rays for the Detection of Special Nuclear Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, J M; Pruet, J A; Brown, D A

    2005-02-14

    The objective of this LDRD project was to develop one or more models for the production of {beta}-delayed {gamma} rays following neutron-induced fission of a special nuclear material (SNM) and to define a standardized formatting scheme which will allow them to be incorporated into some of the modern, general-purpose Monte Carlo transport codes currently being used to simulate inspection techniques proposed for detecting fissionable material hidden in sea-going cargo containers. In this report, we will describe a Monte Carlo model for {beta}-delayed {gamma}-ray emission following the fission of SNM that can accommodate arbitrary time-dependent fission rates and photon collection histories. The model involves direct sampling of the independent fission yield distributions of the system, the branching ratios for decay of individual fission products and spectral distributions representing photon emission from each fission product and for each decay mode. While computationally intensive, it will be shown that this model can provide reasonably detailed estimates of the spectra that would be recorded by an arbitrary spectrometer and may prove quite useful in assessing the quality of evaluated data libraries and identifying gaps in the libraries. The accuracy of the model will be illustrated by comparing calculated and experimental spectra from the decay of short-lived fission products following the reactions {sup 235}U(n{sub th}, f) and {sup 239}Pu(n{sub th}, f). For general-purpose transport calculations, where a detailed consideration of the large number of individual {gamma}-ray transitions in a spectrum may not be necessary, it will be shown that a simple parameterization of the {gamma}-ray source function can be defined which provides high-quality average spectral distributions that should suffice for calculations describing photons being transported through thick attenuating media. 
Finally, a proposal for ENDF-compatible formats that describe each of the models and allow for their straightforward use in Monte Carlo codes will be presented.
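    The direct-sampling scheme described in the abstract can be caricatured with a two-product toy library. All yields, decay constants, and line intensities below are invented for illustration and do not come from any evaluated data library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy library: product -> (independent yield per fission, decay
# constant 1/s, gamma energy MeV, gamma intensity per decay).
products = {
    "A": (0.06, np.log(2) / 2.0, 1.43, 0.34),
    "B": (0.04, np.log(2) / 56.0, 0.60, 0.78),
}

def sample_delayed_gammas(n_fissions):
    """Directly sample emission times and energies of beta-delayed
    gammas for a burst of n_fissions prompt fissions at t = 0."""
    events = []
    for yld, lam, e_gamma, intensity in products.values():
        n_prod = rng.binomial(n_fissions, yld)      # products created
        n_emit = rng.binomial(n_prod, intensity)    # decays emitting the line
        times = rng.exponential(1.0 / lam, n_emit)  # exponential decay times
        events.extend((t, e_gamma) for t in times)
    return events

events = sample_delayed_gammas(100000)
```

For product "A" (2 s half-life) the sampled mean emission time should approach the mean life 1/lambda = 2/ln 2 seconds, which provides a quick statistical check of the sampler.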

  1. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    PubMed

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods, such as those in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  2. Development and validation of a GEANT4 radiation transport code for CT dosimetry

    PubMed Central

    Carver, DE; Kost, SD; Fernald, MJ; Lewis, KG; Fraser, ND; Pickens, DR; Price, RR; Stabin, MG

    2014-01-01

    We have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate our simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air, a standard 16-cm acrylic head phantom, and a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of our Monte Carlo simulations. We found that simulated and measured CTDI values were within an overall average of 6% of each other. PMID:25706135
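    The pencil-chamber quantity being compared (CTDI) can be sketched as follows. The CTDI100 integral over the 100-mm chamber length and the one-third-centre, two-thirds-periphery weighting are standard definitions, but the function names and the flat test profile here are ours.

```python
import numpy as np

def ctdi100(z_mm, profile_mGy, n_slices, slice_mm):
    """CTDI100: single-rotation dose profile integrated over the 100-mm
    chamber length (+/-50 mm), divided by the nominal beam width n*T."""
    z = np.asarray(z_mm, float)
    d = np.asarray(profile_mGy, float)
    m = (z >= -50.0) & (z <= 50.0)
    zm, dm = z[m], d[m]
    # trapezoidal integration of the dose profile along z
    integral = float(np.sum(0.5 * (dm[1:] + dm[:-1]) * np.diff(zm)))
    return integral / (n_slices * slice_mm)

def ctdi_w(center_mGy, periphery_mGy):
    """Weighted CTDI: one third centre plus two thirds periphery."""
    return center_mGy / 3.0 + 2.0 * periphery_mGy / 3.0
```

For a uniform 1 mGy profile and a nominal beam width n*T of 100 mm, ctdi100 returns 1 mGy, a convenient degenerate case for testing either a measured profile or a simulated one.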

  3. Development and validation of a GEANT4 radiation transport code for CT dosimetry.

    PubMed

    Carver, D E; Kost, S D; Fernald, M J; Lewis, K G; Fraser, N D; Pickens, D R; Price, R R; Stabin, M G

    2015-04-01

    The authors have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate their simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air with a standard 16-cm acrylic head phantom and with a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of the Monte Carlo simulations. It was found that simulated and measured CTDI values were within an overall average of 6% of each other.

  4. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. Uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA respectively. ARCHER’s results are in agreement with VIDA’s, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).

  5. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS

    NASA Astrophysics Data System (ADS)

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.

    2017-06-01

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by tetrahedral mesh. The simulation was repeated with varying numbers of meshes and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, of about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
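    A core primitive behind tetrahedral-mesh tracking is deciding which tetrahedron contains a point. The abstract does not describe the PHITS decomposition-map algorithm in detail, so the following is only the standard same-side containment test that such a lookup would ultimately rely on.

```python
import numpy as np

def _same_side(a, b, c, d, p):
    # p lies on the same side of plane (a, b, c) as the fourth vertex d
    n = np.cross(b - a, c - a)
    return np.dot(n, d - a) * np.dot(n, p - a) >= 0.0

def point_in_tetrahedron(p, vertices):
    """True if point p is inside (or on the surface of) the tetrahedron."""
    v0, v1, v2, v3 = (np.asarray(v, float) for v in vertices)
    p = np.asarray(p, float)
    return (_same_side(v0, v1, v2, v3, p) and
            _same_side(v1, v2, v3, v0, p) and
            _same_side(v2, v3, v0, v1, p) and
            _same_side(v3, v0, v1, v2, p))

unit_tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
```

A decomposition map in the spirit of the abstract would precompute, for each cell of a regular grid over the container box, the list of candidate tetrahedra, so each transport step runs this test against only a handful of candidates rather than the whole mesh.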

  6. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E

    2017-06-21

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by tetrahedral mesh. The simulation was repeated with varying numbers of meshes and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, of about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.

  7. Calculation of absorbed dose and biological effectiveness from photonuclear reactions in a bremsstrahlung beam of end point 50 MeV.

    PubMed

    Gudowska, I; Brahme, A; Andreo, P; Gudowski, W; Kierkegaard, J

    1999-09-01

    The absorbed dose due to photonuclear reactions in soft tissue, lung, breast, adipose tissue and cortical bone has been evaluated for a scanned bremsstrahlung beam of end point 50 MeV from a racetrack accelerator. The Monte Carlo code MCNP4B was used to determine the photon source spectrum from the bremsstrahlung target and to simulate the transport of photons through the treatment head and the patient. Photonuclear particle production in tissue was calculated numerically using the energy distributions of photons derived from the Monte Carlo simulations. The transport of photoneutrons in the patient and the photoneutron absorbed dose to tissue were determined using MCNP4B; the absorbed dose due to charged photonuclear particles was calculated numerically assuming total energy absorption in tissue voxels of 1 cm3. The photonuclear absorbed dose to soft tissue, lung, breast and adipose tissue is about (0.11-0.12)+/-0.05% of the maximum photon dose at a depth of 5.5 cm. The absorbed dose to cortical bone is about 45% larger than that to soft tissue. If the contributions from all photoparticles (n, p, 3He and 4He particles and recoils of the residual nuclei) produced in the soft tissue and the accelerator, and from positron radiation and gammas due to induced radioactivity and excited states of the nuclei, are taken into account, the total photonuclear absorbed dose delivered to soft tissue is about 0.15+/-0.08% of the maximum photon dose. It has been estimated that the RBE of the photon beam at 50 MV acceleration potential is approximately 2% higher than that of conventional 60Co radiation.

  8. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of logarithmic and trigonometric functions and the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of calculations using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can run on parallel machines, allowing further acceleration.
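
    The optimization described above replaces exact transcendental calls in the sampling loop with cheap approximations. The sketch below illustrates the idea for the logarithm used in free-path sampling, using a hypothetical least-squares cubic fit after range reduction; the authors' actual polynomial and rational approximations are not reproduced here.

```python
# Sketch of the speed-up idea: replace math.log in the free-path sampler
# with a cheap polynomial after range reduction. The cubic fit below is
# illustrative, not the paper's actual approximation.
import math
import numpy as np

# Fit ln(m) for the reduced mantissa m in [0.5, 1), once at start-up.
m = np.linspace(0.5, 1.0, 1000)
coeffs = np.polyfit(m, np.log(m), 3)

LN2 = math.log(2.0)

def fast_log(u):
    # Range reduction: u = mant * 2**exp with mant in [0.5, 1).
    mant, exp = math.frexp(u)
    return exp * LN2 + np.polyval(coeffs, mant)

# In a photon tracker, the step length would be -fast_log(xi) / mu_t.
err = max(abs(fast_log(x) - math.log(x)) for x in np.linspace(1e-3, 1.0, 5000))
```

    The fit is evaluated once per step instead of calling the exact logarithm; `err` here stays well below the 1% tolerance cited in the abstract.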

  9. SU-F-T-376: The Efficiency of Calculating Photonuclear Reaction On High-Energy Photon Therapy by Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujibuchi, T

    Purpose: Secondary neutrons, which are harmful to the human body, are generated by photonuclear reactions in high-energy photon therapy. Their characteristics are not known in detail, since the calculations needed to evaluate them take a very long time. Since version 2.80, the PHITS (Particle and Heavy Ion Transport code System) Monte Carlo code has offered the new parameter “pnimul”, which forcibly raises the probability of photonuclear reactions to improve calculation efficiency. We investigated the optimum value of “pnimul” for high-energy photon therapy. Methods: An accelerator head geometry based on the specification of a Varian Clinac 21EX was used with PHITS ver. 2.80. A phantom (30 cm * 30 cm * 30 cm) filled with the tissue composition defined by the ICRU (International Commission on Radiation Units and Measurements) was placed at a source-surface distance of 100 cm. We calculated the neutron energy spectra at the surface of the ICRU phantom with “pnimul” set to 1, 10, 100, 1000 and 10000, and compared the total calculation times and the photon behavior using the PDD (percentage depth dose) and OCR (off-center ratio). Next, cutoff energies of 4, 5, 6 and 7 MeV for photons, electrons and positrons were investigated for their effect on calculation efficiency. Results: The total calculation time required for the neutron fluence errors to fall within 1% decreased with increasing “pnimul”. The PDD and OCR showed no differences due to the parameter. The calculation time decreased as the cutoff energy was raised from 4 to 7 MeV; however, raising the cutoff energy did not reduce the time required for the photon errors to fall within 1%. Conclusion: The optimum values of “pnimul” and the cutoff energy were investigated for high-energy photon therapy. The results suggest that using the optimum “pnimul” improves calculation efficiency. The effect of the cutoff energy requires further investigation.
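
    The effect of a biasing parameter like “pnimul” can be illustrated with generic interaction forcing: the probability of a rare channel is multiplied by a factor and the particle's statistical weight is corrected so the tally mean is unchanged. This is an illustrative sketch, not PHITS source code.

```python
# Variance-reduction sketch: force a rare channel (here standing in for
# photonuclear reactions) by a bias factor and compensate with the
# statistical weight, so the estimator stays unbiased while rare events
# occur far more often. Generic illustration, not PHITS source code.
import random

def sample_channel(p_rare, weight, bias):
    # Biased channel selection with weight correction (unbiased estimator).
    p_biased = min(1.0, p_rare * bias)
    if random.random() < p_biased:
        return "photonuclear", weight * p_rare / p_biased
    return "other", weight * (1.0 - p_rare) / (1.0 - p_biased)

random.seed(1)
p, bias, n = 1e-3, 100.0, 200_000
score = sum(w for ch, w in (sample_channel(p, 1.0, bias) for _ in range(n))
            if ch == "photonuclear") / n
# score estimates p, but with ~100x more contributing events than analog
# sampling, which is why the neutron-fluence errors converge faster.
```
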

  10. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field.

    PubMed

    Yang, Y M; Bednarz, B

    2013-02-21

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
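
    Charged-particle stepping in a magnetic field, the operation being compared between the two codes, typically uses a norm-preserving rotation of the velocity. A minimal sketch of the standard Boris rotation (illustrative only; not the EGSnrc or Geant4 implementation) is:

```python
# Minimal Boris rotation for a charged particle in a uniform magnetic
# field: a pure rotation of the velocity that conserves its magnitude.
# Illustrative sketch, not the EGSnrc or Geant4 transport code.
import math

def boris_step(v, qm, B, dt):
    # t = (q/m) * B * dt/2; the update rotates v about B, preserving |v|.
    t = [qm * b * dt / 2.0 for b in B]
    t2 = sum(x * x for x in t)
    s = [2.0 * x / (1.0 + t2) for x in t]
    v_prime = [v[0] + (v[1] * t[2] - v[2] * t[1]),
               v[1] + (v[2] * t[0] - v[0] * t[2]),
               v[2] + (v[0] * t[1] - v[1] * t[0])]
    return [v[0] + (v_prime[1] * s[2] - v_prime[2] * s[1]),
            v[1] + (v_prime[2] * s[0] - v_prime[0] * s[2]),
            v[2] + (v_prime[0] * s[1] - v_prime[1] * s[0])]

v = [1.0, 0.0, 0.0]
for _ in range(1000):
    v = boris_step(v, qm=1.0, B=[0.0, 0.0, 1.0], dt=0.01)
speed = math.sqrt(sum(x * x for x in v))  # conserved by construction
```

    Energy conservation of the field step matters here because the dose perturbations at interfaces come from the curvature of electron paths, not from any spurious energy change.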

  11. Consistency evaluation between EGSnrc and Geant4 charged particle transport in an equilibrium magnetic field

    NASA Astrophysics Data System (ADS)

    Yang, Y. M.; Bednarz, B.

    2013-02-01

    Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.

  12. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interaction data are known with adequate accuracy, but secondary gamma-ray production is known much less well. The purpose of this work is to study secondary gamma-ray production from neutron-induced reactions in iron and lead using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. The calculations show that these nuclear data libraries all differ in their photon production data for neutron-induced reactions and agree poorly with an evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data and is presented in two forms of microscopic cross sections: the ABBN and MATXS formats. Comparison of group-wise calculations using both the ABBN and MATXS data with point-wise calculations using the ROSFOND library shows good agreement. The discrepancies between calculated and experimental (C/E) results for the neutron spectra lie within the experimental errors; for the photon spectrum they lie outside the experimental errors. Calculations using group-wise and point-wise representations of the cross sections agree well for both photon and neutron spectra.
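
    The group-wise libraries discussed here are produced by flux-weighted collapsing of point-wise cross sections. A toy sketch of that collapse, with made-up numbers (not ABBN-RF data), is:

```python
# Flux-weighted collapse of a point-wise cross section to multigroup
# constants, the operation behind ABBN-style libraries. All numbers are
# illustrative, not ABBN-RF or ROSFOND data.

def collapse(energies, sigma, flux, group_bounds):
    # sigma_g = sum(sigma_i * flux_i) / sum(flux_i) over points in group g
    groups = []
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        num = den = 0.0
        for e, s, f in zip(energies, sigma, flux):
            if lo <= e < hi:
                num += s * f
                den += f
        groups.append(num / den if den else 0.0)
    return groups

energies = [0.1, 0.5, 1.5, 3.0, 7.0]   # MeV, illustrative grid
sigma    = [4.0, 3.0, 2.5, 2.0, 1.5]   # barns, illustrative
flux     = [1.0, 2.0, 2.0, 1.0, 0.5]   # weighting spectrum
sig_g = collapse(energies, sigma, flux, [0.0, 1.0, 10.0])
```

    The group constants depend on the assumed weighting spectrum, which is one reason group-wise and point-wise calculations can differ and must be checked against each other, as done in the paper.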

  13. Studying the response of a plastic scintillator to gamma rays using the Geant4 Monte Carlo code.

    PubMed

    Ghadiri, Rasoul; Khorsandi, Jamshid

    2015-05-01

    To determine the gamma-ray response function of an NE-102 scintillator and to investigate the gamma spectra resulting from the transport of optical photons, we simulated an NE-102 scintillator using the Geant4 code. The results of the simulation were compared with experimental data, and good consistency between the two was observed. In addition, the time and spatial distributions, along with the energy distribution and surface treatments of scintillation detectors, were calculated. This simulation enables us to optimize the photomultiplier tube (or photodiode) position to yield the best coupling to the detector.

  14. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
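
    The unfolding problem CUGEL addresses can be written as y = R x, where R is the detector response matrix, y the measured pulse-height spectrum, and x the incident photon spectrum. A toy modern sketch with least squares (illustrative response values, not the original FORTRAN) is:

```python
# Toy spectrum unfolding: given a response matrix R (columns = detector
# response to each monoenergetic line) and a measured spectrum y, recover
# the incident spectrum x from y = R x. Illustrative numbers only; the
# original CUGEL code works discretely on peaks and iteratively on the
# continuum rather than by a single least-squares solve.
import numpy as np

# 4-channel response: full-energy peak plus a flat Compton-like
# continuum below it (columns normalized per incident photon).
R = np.array([[0.6, 0.1, 0.1, 0.1],
              [0.0, 0.6, 0.1, 0.1],
              [0.0, 0.0, 0.6, 0.1],
              [0.0, 0.0, 0.0, 0.6]])
x_true = np.array([100.0, 0.0, 50.0, 20.0])   # incident line intensities
y = R @ x_true                                 # ideal measured spectrum

x_unfolded, *_ = np.linalg.lstsq(R, y, rcond=None)
```
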

  15. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.
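
    The Lambertian-source assumption in such free-space models corresponds to cosine-weighted emission about the surface normal. A small sampling sketch (illustrative only; the paper's lens-diaphragm analysis is not reproduced here) is:

```python
# Cosine-weighted hemisphere sampling for a Lambertian surface emitter:
# emission probability proportional to cos(theta) about the normal (+z).
# Illustrative sketch of the Lambertian-source assumption only.
import math
import random

def sample_lambertian():
    # theta = asin(sqrt(u)) gives p(theta) proportional to cos * sin.
    u, v = random.random(), random.random()
    theta = math.asin(math.sqrt(u))
    phi = 2.0 * math.pi * v
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

random.seed(0)
mean_cos = sum(sample_lambertian()[2] for _ in range(100_000)) / 100_000
# For a Lambertian emitter, the expected value of cos(theta) is 2/3.
```
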

  16. Efficient simultaneous dense coding and teleportation with two-photon four-qubit cluster states

    NASA Astrophysics Data System (ADS)

    Zhang, Cai; Situ, Haozhen; Li, Qin; He, Guang Ping

    2016-08-01

    We first propose a simultaneous dense coding protocol with two-photon four-qubit cluster states in which two receivers can simultaneously obtain their respective classical information sent by a sender. Because each photon has two degrees of freedom, the protocol achieves a high transmittance. The security of the simultaneous dense coding protocol is also analyzed. Second, we investigate how to simultaneously teleport two different quantum states, encoded in the polarization and path degrees of freedom, to two receivers using cluster states, and discuss the protocol's security. The preparation and transmission of two-photon four-qubit cluster states is less difficult than that of four-photon entangled states, and such states have been experimentally generated with nearly perfect fidelity and a high generation rate. Thus, our protocols are feasible with current quantum techniques.
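
    The cluster-state scheme builds on ordinary two-qubit dense coding, in which a local Pauli operation on one half of a Bell pair encodes two classical bits. A minimal numpy sketch of that primitive (not the four-qubit cluster-state protocol itself) is:

```python
# Standard two-qubit dense coding sketch: the sender's local Pauli maps
# a shared Bell pair onto one of four orthogonal Bell states, so a Bell
# measurement by the receiver recovers two classical bits. Illustrative
# primitive only, not the paper's cluster-state protocol.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

# Two classical bits select the sender's local operation.
encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): X @ Z}

def encode(bits):
    # The operation acts only on the sender's (first) qubit.
    return np.kron(encodings[bits], I) @ bell

states = {bits: encode(bits) for bits in encodings}
# Distinct messages give orthogonal states, hence perfect decoding.
overlap = abs(np.vdot(states[(0, 1)], states[(1, 0)]))
```
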

  17. Electromagnetic Chirps from Neutron Star–Black Hole Mergers

    NASA Astrophysics Data System (ADS)

    Schnittman, Jeremy D.; Dal Canton, Tito; Camp, Jordan; Tsang, David; Kelly, Bernard J.

    2018-02-01

    We calculate the electromagnetic signal of a gamma-ray flare coming from the surface of a neutron star shortly before merger with a black hole companion. Using a new version of the Monte Carlo radiation transport code Pandurata that incorporates dynamic spacetimes, we integrate photon geodesics from the neutron star surface until they reach a distant observer or are captured by the black hole. The gamma-ray light curve is modulated by a number of relativistic effects, including Doppler beaming and gravitational lensing. Because the photons originate from the inspiraling neutron star, the light curve closely resembles the corresponding gravitational waveform: a chirp signal characterized by a steadily increasing frequency and amplitude. We propose to search for these electromagnetic chirps using matched filtering algorithms similar to those used in LIGO data analysis.
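
    The proposed search amounts to correlating a noisy light curve against a chirp template and locating the peak. A toy matched-filter sketch (synthetic data, not the LIGO pipeline or the Pandurata output) is:

```python
# Toy matched filter: slide a chirp template (rising frequency, as in the
# predicted electromagnetic chirp) across noisy data and locate the
# correlation peak. Synthetic illustration only.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 4000)
chirp = np.sin(2.0 * np.pi * (20.0 * t + 40.0 * t**2))  # rising frequency

data = 0.5 * rng.standard_normal(8000)
inject_at = 3000
data[inject_at:inject_at + chirp.size] += chirp          # buried chirp

# Matched filter = sliding inner product of the data with the template.
snr = np.correlate(data, chirp, mode="valid")
recovered = int(np.argmax(snr))  # peaks at the injection index
```
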

  18. Evaluation Of Shielding Efficacy Of A Ferrite Containing Ceramic Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verst, C.

    2015-10-12

    The shielding evaluation of the ferrite-based Mitsuishi ceramic material produced comparative dose-attenuation measurements and simulated projections for several radiation sources and possible shielding sizes. High-resolution gamma spectroscopy provided uncollided and scattered photon spectra at three energies, confirming theoretical estimates of the ceramic’s mass attenuation coefficient, μ/ρ. High-level irradiation experiments were performed using Co-60, Cs-137, and Cf-252 sources to measure penetrating dose rates through steel, lead, concrete, and the provided ceramic slabs. The results were used to validate the radiation transport code MCNP6, which was then used to generate dose-rate attenuation curves as a function of shielding material, thickness, and mass for photons and neutrons ranging in energy from 200 keV to 2 MeV.
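
    Narrow-beam transmission through such a slab follows the Beer-Lambert relation I/I0 = exp(-(μ/ρ)·ρ·t), which is what the mass attenuation coefficient measurements probe. A small sketch with illustrative values (not the measured Mitsuishi data) is:

```python
# Beer-Lambert narrow-beam transmission through a slab, the relation
# behind mass-attenuation-coefficient measurements. The coefficient and
# density below are illustrative, not the measured ceramic's values.
import math

def transmission(mu_over_rho_cm2_g, density_g_cm3, thickness_cm):
    # I/I0 = exp(-(mu/rho) * rho * t)
    return math.exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

# e.g. a photon energy where mu/rho = 0.06 cm^2/g in a 5 g/cm^3 ceramic
frac = transmission(0.06, 5.0, 10.0)   # fraction transmitted by a 10 cm slab

# Half-value layer from the same coefficient: HVL = ln(2) / (mu/rho * rho)
hvl_cm = math.log(2.0) / (0.06 * 5.0)
```

    Note this describes uncollided transmission only; the scattered (build-up) component measured by the gamma spectroscopy requires transport calculations such as the MCNP6 simulations described above.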

  19. Electromagnetic Chirps from Neutron Star-Black Hole Mergers

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy D.; Dal Canton, Tito; Camp, Jordan B.; Tsang, David; Kelly, Bernard J.

    2018-01-01

    We calculate the electromagnetic signal of a gamma-ray flare coming from the surface of a neutron star shortly before merger with a black hole companion. Using a new version of the Monte Carlo radiation transport code Pandurata that incorporates dynamic spacetimes, we integrate photon geodesics from the neutron star surface until they reach a distant observer or are captured by the black hole. The gamma-ray light curve is modulated by a number of relativistic effects, including Doppler beaming and gravitational lensing. Because the photons originate from the inspiraling neutron star, the light curve closely resembles the corresponding gravitational waveform: a chirp signal characterized by a steadily increasing frequency and amplitude. We propose to search for these electromagnetic chirps using matched filtering algorithms similar to those used in LIGO data analysis.

  20. Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu

    2011-03-15

    Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially in the case of scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving arising from Epp, except for those arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation, since they are both based on the well-validated standard EGSnrc radiation transport physics model.

  1. An MCNP-based model of a medical linear accelerator x-ray photon beam.

    PubMed

    Ajaj, F A; Ghassal, N M

    2003-09-01

    The major components in the x-ray photon beam path of the treatment head of the VARIAN Clinac 2300 EX medical linear accelerator were modeled and simulated using the Monte Carlo N-Particle radiation transport computer code (MCNP). Simulated components include the x-ray target, primary conical collimator, x-ray beam flattening filter and secondary collimators. X-ray photon energy spectra and angular distributions were calculated using the model. The x-ray beam emerging from the secondary collimators was scored by considering the total x-ray spectrum from the target as the source of x-rays at the target position. The depth dose distribution and dose profiles at different depths and field sizes have been calculated at a nominal operating potential of 6 MV and found to be within acceptable limits. It is concluded that accurate specification of the component dimensions, composition and nominal accelerating potential gives a good assessment of the x-ray energy spectra.

  2. A deterministic electron, photon, proton and heavy ion transport suite for the study of the Jovian moon Europa

    NASA Astrophysics Data System (ADS)

    Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.

    2011-02-01

    A Langley Research Center (LaRC) developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distributions of the aforementioned particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light and heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file describing the geometry from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon shield-target combination, as well as to simulate the shielding response of a high-atomic-number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the choice of graded-Z shield selection and analysis.

  3. Pion and electromagnetic contribution to dose: Comparisons of HZETRN to Monte Carlo results and ISS data

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Blattnig, Steve R.; Reddell, Brandon; Bahadori, Amir; Norman, Ryan B.; Badavi, Francis F.

    2013-07-01

    Recent work has indicated that pion production and the associated electromagnetic (EM) cascade may be an important contribution to the total astronaut exposure in space. Recent extensions to the deterministic space radiation transport code, HZETRN, allow the production and transport of pions, muons, electrons, positrons, and photons. In this paper, the extended code is compared to the Monte Carlo codes, Geant4, PHITS, and FLUKA, in slab geometries exposed to galactic cosmic ray (GCR) boundary conditions. While improvements in the HZETRN transport formalism for the new particles are needed, it is shown that reasonable agreement on dose is found at larger shielding thicknesses commonly found on the International Space Station (ISS). Finally, the extended code is compared to ISS data on a minute-by-minute basis over a seven day period in 2001. The impact of pion/EM production on exposure estimates and validation results is clearly shown. The Badhwar-O'Neill (BO) 2004 and 2010 models are used to generate the GCR boundary condition at each time-step allowing the impact of environmental model improvements on validation results to be quantified as well. It is found that the updated BO2010 model noticeably reduces overall exposure estimates from the BO2004 model, and the additional production mechanisms in HZETRN provide some compensation. It is shown that the overestimates provided by the BO2004 GCR model in previous validation studies led to deflated uncertainty estimates for environmental, physics, and transport models, and allowed an important physical interaction (π/EM) to be overlooked in model development. Despite the additional π/EM production mechanisms in HZETRN, a systematic under-prediction of total dose is observed in comparison to Monte Carlo results and measured data.

  4. Complete event simulations of nuclear fission

    NASA Astrophysics Data System (ADS)

    Vogt, Ramona

    2015-10-01

    For many years, the state of the art for treating fission in radiation transport codes has involved sampling from average distributions. In these average fission models energy is not explicitly conserved and everything is uncorrelated because all particles are emitted independently. However, in a true fission event, the energies, momenta and multiplicities of the emitted particles are correlated. Such correlations are interesting for many modern applications. Event-by-event generation of complete fission events makes it possible to retain the kinematic information for all particles emitted: the fission products as well as prompt neutrons and photons. It is therefore possible to extract any desired correlation observables. Complete event simulations can be included in general Monte Carlo transport codes. We describe the general functionality of currently available fission event generators and compare results for several important observables. This work was performed under the auspices of the US DOE by LLNL, Contract DE-AC52-07NA27344. We acknowledge support of the Office of Defense Nuclear Nonproliferation Research and Development in DOE/NNSA.
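
    The defining feature of event-by-event generation is that each sampled event conserves energy and retains correlations among the emitted particles, rather than sampling each particle independently from average distributions. A toy sketch in that spirit (illustrative numbers; not FREYA or any production event generator) is:

```python
# Toy event-by-event fission sketch: sample an integer neutron
# multiplicity, then partition a fixed available energy among the
# neutrons so every single event conserves energy exactly. Correlations
# between multiplicity and per-neutron energy emerge automatically.
# All numbers are illustrative; this is not a production generator.
import random

def fission_event(nubar=2.4, e_available=5.0):
    # Integer multiplicity with mean nubar (simple stochastic rounding).
    base = int(nubar)
    n = base + (1 if random.random() < nubar - base else 0)
    if n == 0:
        return []
    # Random partition of e_available into n positive shares (MeV).
    cuts = sorted(random.random() for _ in range(n - 1))
    bounds = [0.0] + cuts + [1.0]
    return [e_available * (b - a) for a, b in zip(bounds[:-1], bounds[1:])]

random.seed(7)
events = [fission_event() for _ in range(10_000)]
mean_nu = sum(len(ev) for ev in events) / len(events)
# Every sampled event conserves the available energy by construction,
# unlike sampling each neutron independently from an average spectrum.
max_err = max(abs(sum(ev) - 5.0) for ev in events if ev)
```
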

  5. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William

    2011-01-01

    A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describes the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105-day Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparison with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth curve as a standard shield-target combination output, as well as the shielding response of high-atomic-number (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice of graded-Z shield analysis.

  6. Neutron flux measurements on a mock-up of a storage cask for high-level nuclear waste using 2.5 MeV neutrons.

    PubMed

    Suárez, H Saurí; Becker, F; Klix, A; Pang, B; Döring, T

    2018-06-07

    To store and dispose of spent nuclear fuel, shielding casks are employed to reduce the emitted radiation. To evaluate the exposure of employees handling such casks, Monte Carlo radiation transport codes can be employed. Nevertheless, to assess the reliability of these codes and the underlying nuclear data, experimental checks are required. In this study, a neutron generator (NG) producing neutrons of 2.5 MeV was employed to simulate neutrons produced in spent nuclear fuel. Different configurations of shielding layers of steel and polyethylene were positioned between the target of the NG and an NE-213 detector. The results of the measurements of neutron and γ radiation and the corresponding simulations with the code MCNP6 are presented. Details of the experimental set-up as well as neutron and photon flux spectra are provided as reference points for such NG investigations with shielding structures.

  7. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  8. LATIS3D: The Gold Standard for Laser-Tissue-Interaction Modeling

    NASA Astrophysics Data System (ADS)

    London, R. A.; Makarewicz, A. M.; Kim, B. M.; Gentile, N. A.; Yang, T. Y. B.

    2000-03-01

    The goal of this LDRD project has been to create LATIS3D, the world's premier computer program for laser-tissue interaction modeling. The development was based on recent experience with the 2D LATIS code and the ASCI code, KULL. With LATIS3D, important applications in laser medical therapy were researched, including dynamical calculations of tissue emulsification and ablation, photothermal therapy, and photon transport for photodynamic therapy. This project also enhanced LLNL's core competency in laser-matter interactions and high-energy-density physics by pushing simulation codes into new parameter regimes and by attracting external expertise. This will benefit both existing LLNL programs such as ICF and SBSS and emerging programs in medical technology and other laser applications. The purpose of this project was to develop and apply a computer program for laser-tissue interaction modeling to aid in the development of new instruments and procedures in laser medicine.

  9. Effects of photon field on heat transport through a quantum wire attached to leads

    NASA Astrophysics Data System (ADS)

    Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

    2018-01-01

    We theoretically investigate photo-thermoelectric transport through a quantum wire in a photon cavity coupled to electron reservoirs with different temperatures. Our approach, based on a quantum master equation, allows us to investigate the influence of a quantized photon field on the heat current and thermoelectric transport in the system. We find that the heat current through the quantum wire is influenced by the photon field, resulting in a negative heat current in certain cases. The characteristics of the transport are studied by tuning the ratio ħωγ/kBΔT between the photon energy ħωγ and the thermal energy kBΔT. The thermoelectric transport is enhanced by the cavity photons when kBΔT > ħωγ. By contrast, if kBΔT < ħωγ, the photon field is dominant and a suppression of the thermoelectric transport can be found when the cavity-photon field is close to resonance with the two lowest one-electron states in the system. Our approach points to a new technique to amplify thermoelectric current in nano-devices.

  10. PENGEOM-A general-purpose geometry package for Monte Carlo simulation of radiation transport in material systems defined by quadric surfaces

    NASA Astrophysics Data System (ADS)

    Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc

    2016-02-01

    The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.

  11. Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summary. Program title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney random number library. Microsoft Foundation Class library. NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures.
Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
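
    The Prahl-style random walk that this program summary refers to can be sketched in a few lines. Below is a minimal, illustrative, single-threaded Python version (isotropic scattering only; all function and variable names are our own inventions, not the Phoogle code):

```python
import math
import random

def iso_dir(rng):
    """Sample an isotropic unit direction vector."""
    cos_t = 2.0 * rng.random() - 1.0
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    return sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t

def mean_absorption_distance(n_photons, mu_a, mu_s, seed=1):
    """Random walk of photons from an isotropic point source in an
    infinite homogeneous turbid medium; returns the mean distance from
    the source at which photons are absorbed.

    mu_a, mu_s: absorption and scattering coefficients (1/mm).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s           # total interaction coefficient
    albedo = mu_s / mu_t         # survival probability per collision
    total = 0.0
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = iso_dir(rng)
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # sample free path
            x += ux * step; y += uy * step; z += uz * step
            if rng.random() > albedo:                    # photon absorbed here
                total += math.sqrt(x * x + y * y + z * z)
                break
            ux, uy, uz = iso_dir(rng)                    # isotropic rescatter
    return total / n_photons
```

    Because every photon history is independent, the outer loop parallelizes trivially, which is exactly what the GPU implementation described above exploits.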

  12. Monte Carlo calculation of dose rate conversion factors for external exposure to photon emitters in soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouvas, A.; Xanthos, S.; Antonopoulos-Domis, M.

    2000-03-01

    The dose rate conversion factors {dot D}{sub CF} (absorbed dose rate in air per unit activity per unit of soil mass, nGy h{sup {minus}1} per Bq kg{sup {minus}1}) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: (1) the MCNP code of Los Alamos; (2) the GEANT code of CERN; and (3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and MCNP in particular, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the {dot D}{sub CF} values calculated by the three codes are in very good agreement with one another. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. By contrast, the agreement is not as good (differences of 20--30%) for low-energy photons.
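
    The "independent straightforward calculation" used here to check the unscattered flux is, for a point source, the familiar point-kernel expression φ = S·e^(−μr)/(4πr²). A minimal sketch (the function name and example units are ours, purely for illustration):

```python
import math

def unscattered_flux(source_strength, r, mu=0.0):
    """Primary (unscattered) photon flux at distance r (m) from an
    isotropic point source (photons/s), attenuated by a medium with
    linear attenuation coefficient mu (1/m) along the path:

        phi = S * exp(-mu * r) / (4 * pi * r**2)
    """
    return source_strength * math.exp(-mu * r) / (4.0 * math.pi * r * r)
```

    Comparing a Monte Carlo tally of only-uncollided photons against this closed form isolates errors in the transport geometry and attenuation data from errors in the scattering treatment.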

  13. Influence of photon energy cuts on PET Monte Carlo simulation results.

    PubMed

    Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan

    2012-07-01

    The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences as well as on the simulation time and noise-equivalent count rate is evaluated by comparing the results for higher cuts to those for 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi X(K) rays must be simulated in order to obtain unbiased estimates for the number of singles, true, scattered, and random coincidences as well as for an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation time decreases slowly with the increase of the photon cut. 
The simulation of the transport of characteristic x rays plays an important role if an accurate modeling of a PET scanner system is to be achieved. The simulation time decreases only slowly as the photon cut is increased, which, combined with the accuracy loss at high cuts, means that the use of high photon energy cuts is not recommended for the acceleration of MC simulations.
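
    The noise-equivalent count rate evaluated in this study is conventionally defined (following the NEMA convention) as NEC = T²/(T + S + R), where T, S and R are the true, scattered and random coincidence rates. The abstract does not spell out the formula, so this standard form is an assumption of the sketch:

```python
def nec_rate(trues, scatters, randoms):
    """Noise-equivalent count rate, NEC = T^2 / (T + S + R).

    Biased estimates of any of the three coincidence rates (e.g. from
    an overly high photon energy cut) propagate directly into NEC.
    """
    total = trues + scatters + randoms
    return trues ** 2 / total if total > 0 else 0.0
```

    For example, a scan with equal true and background rates (T = 100, S + R = 100) yields NEC = 50, half the true rate.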

  14. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed to reproduce the experimental TPR20,10 quality index, providing a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and PENELOPE Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
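
    The beam quality index TPR20,10 used for this validation can be estimated from measured percentage depth doses via the empirical relation quoted in IAEA TRS-398, TPR20,10 = 1.2661·PDD20,10 − 0.0595. A small sketch (the 6 MV sample values in the usage note are illustrative only):

```python
def tpr20_10_from_pdd(pdd20, pdd10):
    """Estimate the beam quality index TPR20,10 from percentage depth
    doses at 20 cm and 10 cm depth (10x10 cm2 field, SSD 100 cm),
    using the empirical relation quoted in IAEA TRS-398:

        TPR20,10 = 1.2661 * PDD20,10 - 0.0595
    """
    pdd_ratio = pdd20 / pdd10  # PDD20,10 is the ratio of the two PDDs
    return 1.2661 * pdd_ratio - 0.0595
```

    With typical 6 MV values such as PDD(20 cm) ≈ 38.5% and PDD(10 cm) ≈ 66.5%, this gives a TPR20,10 of about 0.67, in the expected range for a 6 MV beam.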

  15. Methodology comparison for gamma-heating calculations in material-testing reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.

    2015-07-01

    The Jules Horowitz Reactor (JHR) is a Material-Testing Reactor (MTR) under construction in the south of France at CEA Cadarache (French Alternative Energies and Atomic Energy Commission). It will typically host about 20 simultaneous irradiation experiments in the core and in the beryllium reflector. These experiments will help us better understand the complex phenomena occurring during the accelerated ageing of materials and the irradiation of nuclear fuels. Gamma heating, i.e. photon energy deposition, is mainly responsible for the temperature rise in non-fuelled zones of nuclear reactors, including JHR internal structures and irradiation devices. As temperature is a key parameter for physical models describing the behavior of materials, accurate control of temperature, and hence gamma heating, is required in irradiation devices and samples in order to perform a suitable advanced analysis of future experimental results. From a broader point of view, JHR's global attractiveness as an MTR depends on its ability to monitor experimental parameters with high accuracy, including gamma heating. Strict control of temperature levels is also necessary in terms of safety. As JHR structures are warmed up by gamma heating, they must be appropriately cooled down to prevent creep deformation or melting. Cooling-power sizing is based on calculated levels of gamma heating in the JHR. Due to these safety concerns, accurate calculation of gamma heating, with well-controlled bias and an associated uncertainty as low as possible, is all the more important. There are two main kinds of calculation bias: bias coming from nuclear data on the one hand, and bias coming from physical approximations assumed by computer codes and by the general calculation route on the other hand. The former must be determined by comparison between calculation and experimental data; the latter by calculation comparisons between codes and between methodologies.
In this presentation, we focus on this latter kind of bias. Nuclear heating is represented by the physical quantity called absorbed dose (energy deposition induced by particle-matter interactions, divided by mass). Its calculation with Monte Carlo codes is possible but computationally expensive, as it requires transport simulation of charged particles along with neutrons and photons. For that reason, the calculation of another physical quantity, called KERMA, is often preferred, as KERMA calculation with Monte Carlo codes only requires transport of neutral particles. However, KERMA is only an estimator of the absorbed dose, and many conditions must be fulfilled for KERMA to be equal to absorbed dose, including the so-called condition of electronic equilibrium. Also, Monte Carlo computations of absorbed dose still involve some physical approximations, even though there is only a limited number of them. Some of these approximations are linked to the way Monte Carlo codes handle the transport simulation of charged particles and the productive and destructive interactions between photons, electrons and positrons. There exists a huge variety of electromagnetic shower models which tackle this topic. Differences in the implementation of these models can lead to discrepancies in calculated values of absorbed dose between different Monte Carlo codes. The order of magnitude of such potential discrepancies should be quantified for JHR gamma-heating calculations. We consequently present a two-pronged plan. In a first phase, we intend to perform compared absorbed dose / KERMA Monte Carlo calculations in the JHR. This way, we will study the presence or absence of electronic equilibrium in the different JHR structures and experimental devices, and we will give recommendations for the choice of KERMA or absorbed dose when calculating gamma heating in the JHR.
In a second phase, we intend to perform compared TRIPOLI4 / MCNP absorbed dose calculations in a simplified JHR-representative geometry. For this comparison, we will use the same nuclear data library for both codes (the European library JEFF3.1.1 and the photon library EPDL97) so as to isolate the effects of electromagnetic shower models on absorbed dose calculation. This way, we hope to get insightful feedback on these models and their implementation in Monte Carlo codes. (authors)

  16. Sci—Thur AM: YIS - 03: irtGPUMCD: a new GPU-calculated dosimetry code for {sup 177}Lu-octreotate radionuclide therapy of neuroendocrine tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe

    2014-08-15

    In peptide receptor radionuclide therapy (PRRT), large inter-patient variability in absorbed radiation dose per administered activity mandates the use of individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed {sup 177}Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to {sup 177}Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24 and 72 hours after {sup 177}Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the {sup 177}Lu-octreotate 4D quantitative biodistribution. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma photon transport, irtGPUMCD computed the dose rate at each time point. The average kidney absorbed dose was obtained from 1-cm{sup 3} VOI dose rate samples on each cortex, subjected to a biexponential curve fit. Integration of the resulting time-dose rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30–0.71 Gy/GBq). Comparison to another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair accordance with irtGPUMCD (11.4 ± 6.8%, range: 3.3–26.2%). These results suggest that the irtGPUMCD code could be used to personalize administered activity in PRRT, which could improve clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
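
    The fit-and-integrate step described above (dose rate samples at 4, 24 and 72 h, fitted and integrated to infinity) can be illustrated with a simplified monoexponential stand-in; the paper uses a biexponential fit, and the sample values below are invented for the sketch:

```python
import math

def absorbed_dose_monoexp(times_h, dose_rates):
    """Fit D(t) = D0 * exp(-lam * t) to sampled dose rates (Gy/h at
    times in hours) by log-linear least squares, then integrate
    analytically from 0 to infinity: dose = D0 / lam (in Gy).

    A simplified stand-in for the biexponential fit in the abstract.
    """
    n = len(times_h)
    xs, ys = times_h, [math.log(d) for d in dose_rates]
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    lam, d0 = -slope, math.exp(intercept)
    return d0 / lam
```

    For a biexponential D(t) = A·e^(−a·t) + B·e^(−b·t), the same analytic integration gives A/a + B/b, which is what integrating the fitted time-dose rate curve amounts to.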

  17. Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.

    2009-01-01

    Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

  18. Comparison of conversion coefficients for equivalent dose in terms of air kerma for photons using a male adult voxel simulator in sitting and standing posture with geometry of irradiation antero-posterior

    NASA Astrophysics Data System (ADS)

    Galeano, D. C.; Cavalcante, F. R.; Carvalho, A. B.; Hunt, J.

    2014-02-01

    The dose conversion coefficient (DCC) is important to quantify and assess effective doses associated with medical, professional and public exposures. The calculation of DCCs using anthropomorphic simulators and radiation transport codes is justified since in-vivo measurement of effective dose is extremely difficult and not practical for occupational dosimetry. DCCs have been published by the ICRP using simulators in a standing posture, which is not applicable to all exposure scenarios and can provide an inaccurate dose estimation. The aim of this work was to calculate DCCs for equivalent dose in terms of air kerma (H/Kair) using the Visual Monte Carlo (VMC) code and the VOXTISS8 adult male voxel simulator in sitting and standing postures. In both postures, the simulator was irradiated by a plane source of monoenergetic photons in antero-posterior (AP) geometry. The photon energy ranged from 15 keV to 2 MeV. The DCCs for both postures were compared, and the DCCs for the standing simulator were higher. For certain organs, the differences in DCCs were more significant, as in the gonads (48% higher), bladder (16% higher) and colon (11% higher). As these organs are positioned in the abdominal region, the posture of the anthropomorphic simulator modifies how the radiation is transported and how the energy is deposited. It was also noted that the average percentage difference of conversion coefficients was 33% for the bone marrow, 11% for the skin, 13% for the bone surface and 31% for the muscle. For other organs, the percentage difference of the DCCs for the two postures was not significant (less than 5%), since the organs of the head, chest and upper abdomen undergo no anatomical changes between postures. We can conclude that it is important to obtain DCCs using postures different from those present in the scientific literature.

  19. Simulation of multi-photon emission isotopes using time-resolved SimSET multiple photon history generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun

    Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure the bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes because it models neither complex decay schemes nor the time-dependent decay process. To extend the versatility of SimSET to the simulation of multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. To build the time-resolved SimSET (trSimSET) with a radionuclide decay process, the new MCS model introduces new features into the code, including decay-time information and photon time-of-flight information. The half-lives of the energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator achieves accuracy comparable to GATE while retaining better computational efficiency. The new MCS code is very useful for studying multi-photon imaging with novel isotopes that requires the simulation of lifetimes and time-of-flight measurements. (authors)

  20. Photon transport in a dissipative chain of nonlinear cavities

    NASA Astrophysics Data System (ADS)

    Biella, Alberto; Mazza, Leonardo; Carusotto, Iacopo; Rossini, Davide; Fazio, Rosario

    2015-05-01

    By means of numerical simulations and the input-output formalism, we study photon transport through a chain of coupled nonlinear optical cavities subject to uniform dissipation. Photons are injected from one end of the chain by means of a coherent source. The propagation through the array of cavities is sensitive to the interplay between the photon hopping strength and the local nonlinearity in each cavity. We characterize photon transport by studying the populations and the photon correlations as a function of the cavity position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single-photon and multiphoton resonances directly reflects the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light, even in the presence of dissipation. The numerical algorithm we use, based on the time-evolving block decimation scheme adapted to mixed states, allows us to simulate large arrays (up to 60 cavities). The scaling of photon transmission with the number of cavities does depend on the structure of the many-body photon states inside the array.

  1. Monte Carlo calculations of initial energies of electrons in water irradiated by photons with energies up to 1GeV.

    PubMed

    Todo, A S; Hiromoto, G; Turner, J E; Hamm, R N; Wright, H A

    1982-12-01

    Previous calculations of the initial energies of electrons produced in water irradiated by photons are extended to 1 GeV by including pair and triplet production. Calculations were performed with the Monte Carlo computer code PHOEL-3, which replaces the earlier code, PHOEL-2. Tables of initial electron energies are presented for single interactions of monoenergetic photons at a number of energies from 10 keV to 1 GeV. These tables can be used to compute kerma in water irradiated by photons with arbitrary energy spectra up to 1 GeV. In addition, separate tables of Compton- and pair-electron spectra are given over this energy range. The code PHOEL-3 is available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830.
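
    The kerma computation such tables support amounts to folding a photon fluence spectrum with energy-transfer data, K = Σ Φ(Eᵢ)·Eᵢ·(μ_tr/ρ)(Eᵢ). A hedged sketch (the bin values and coefficient in the usage note are illustrative, not taken from the PHOEL-3 tables):

```python
MEV_PER_G_TO_GY = 1.602e-10  # 1 MeV/g = 1.602e-13 J / 1e-3 kg = 1.602e-10 Gy

def kerma_water(fluence_spectrum):
    """Kerma in water from a binned photon fluence spectrum.

    fluence_spectrum: iterable of (fluence [photons/cm^2],
                                   energy [MeV],
                                   mu_tr_over_rho [cm^2/g]) tuples,
    one per energy bin.  K = sum(phi * E * mu_tr/rho), converted to Gy.
    """
    mev_per_g = sum(phi * e * mu for phi, e, mu in fluence_spectrum)
    return mev_per_g * MEV_PER_G_TO_GY
```

    For instance, a single 1 MeV bin with a fluence of 1e9 photons/cm² and μ_tr/ρ = 0.03 cm²/g (an illustrative value) yields a kerma of about 4.8 mGy.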

  2. The radiation dosimetry of intrathecally administered radionuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stabin, M.G.; Evans, J.F.

    The radiation dose to the spine, spinal cord, marrow, and other organs of the body from intrathecal administration of several radiopharmaceuticals was studied. Anatomic models were developed for the spine, spinal cerebrospinal fluid (CSF), spinal cord, spinal skeleton, cranial skeleton, and cranial CSF. A kinetic model for the transport of CSF was used to determine residence times in the CSF; material leaving the CSF was thereafter assumed to enter the bloodstream and follow the kinetics of the radiopharmaceutical as if intravenously administered. The radiation transport codes MCNP and ALGAMP were used to model the electron and photon transport and energy deposition. The dosimetry of Tc-99m DTPA and HSA, In-111 DTPA, I-131 HSA, and Yb-169 DTPA was studied. Radiation dose profiles for the spinal cord and marrow in the spine were developed, and average doses to all other organs were estimated, including dose distributions within the bone and marrow.

  3. Fiber transport of spatially entangled photons

    NASA Astrophysics Data System (ADS)

    Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.

    2012-03-01

    High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time and frequency domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; we therefore discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and the pros and cons of various fiber types.

  4. Modelling the transport of optical photons in scintillation detectors for diagnostic and radiotherapy imaging

    NASA Astrophysics Data System (ADS)

    Roncali, Emilie; Mosleh-Shirazi, Mohammad Amin; Badano, Aldo

    2017-10-01

    Computational modelling of radiation transport can enhance the understanding of the relative importance of individual processes involved in imaging systems. Modelling is a powerful tool for improving detector designs in ways that are impractical or impossible to achieve through experimental measurements. Modelling of light transport in scintillation detectors used in radiology and radiotherapy imaging that rely on the detection of visible light plays an increasingly important role in detector design. Historically, researchers have invested heavily in modelling the transport of ionizing radiation while light transport is often ignored or coarsely modelled. Due to the complexity of existing light transport simulation tools and the breadth of custom codes developed by users, light transport studies are seldom fully exploited and have not reached their full potential. This topical review aims at providing an overview of the methods employed in freely available and other described optical Monte Carlo packages and analytical models and discussing their respective advantages and limitations. In particular, applications of optical transport modelling in nuclear medicine, diagnostic and radiotherapy imaging are described. A discussion on the evolution of these modelling tools into future developments and applications is presented. The authors declare equal leadership and contribution regarding this review.

  5. Effects of the plasma profiles on photon and pair production in ultrahigh intensity laser solid interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Y. X.; Jin, X. L., E-mail: jinxiaolin@uestc.edu.cn; Yan, W. Z.

    The model of photon and pair production in strong-field quantum electrodynamics is implemented into our 1D3V particle-in-cell code with a Monte Carlo algorithm. Using this code, the evolution of the particles in ultrahigh intensity laser (∼10^23 W/cm^2) interaction with an aluminum foil target is observed. Four different initial plasma profiles are considered in the simulations. The effects of the initial plasma profiles on photon and pair production, energy spectra, and energy evolution are analyzed. The results imply that one can set an optimal initial plasma profile to obtain the desired photon distributions.

  6. Controlling single-photon transport in an optical waveguide coupled to an optomechanical cavity with a Λ-type three-level atom

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Qing; Zhu, Zhong-Hua; Peng, Zhao-Hui; Jiang, Chun-Lei; Chai, Yi-Feng; Hai, Lian; Tan, Lei

    2018-06-01

    We theoretically study the single-photon transport along a one-dimensional optical waveguide coupled to an optomechanical cavity containing a Λ-type three-level atom. Our numerical results show that the transmission spectra of the incident photon can be well controlled by such a hybrid atom-optomechanical system. The effects of the optomechanical coupling strength, the classical laser beam applied to the atom, atom-cavity detuning, and atomic dissipation on the single-photon transport properties are analyzed. It is of particular interest that an analogous double electromagnetically induced transparency emerges in the single-photon transmission spectra.

  7. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
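    The single-scatter-plus-buildup model described above lends itself to a compact numerical sketch. The Python fragment below is illustrative only: the Taylor-form buildup coefficients and the scattering coefficient are placeholder values, not the fitted data used in SKY, SILOGP, or WALLGP.

    ```python
    import math

    def taylor_buildup(mu_r, A=24.6, a1=-0.0954, a2=0.0308):
        """Taylor-form buildup factor B(mu*r) = A*exp(-a1*mu*r) + (1-A)*exp(-a2*mu*r).
        Coefficients here are placeholders, not fitted data; note B(0) = 1."""
        return A * math.exp(-a1 * mu_r) + (1.0 - A) * math.exp(-a2 * mu_r)

    def single_scatter_response(src, scat, det, mu, scatter_coeff):
        """Trace a photon from the source to its first scatter, then apply a
        buildup factor along the direct path from the scatter point to the
        detector, per the single-scatter model sketched in the abstract."""
        r1 = math.dist(src, scat)   # source -> scatter point
        r2 = math.dist(scat, det)   # scatter point -> detector
        # uncollided point-source fluence arriving at the scatter point
        phi_scatter = math.exp(-mu * r1) / (4.0 * math.pi * r1 ** 2)
        # scattered contribution, attenuated and buildup-corrected on the way out
        phi_det = (scatter_coeff * math.exp(-mu * r2) * taylor_buildup(mu * r2)
                   / (4.0 * math.pi * r2 ** 2))
        return phi_scatter * phi_det
    ```

    A full skyshine calculation integrates this kernel over all scatter points in the air above the shield, e.g. with the Gauss quadrature mentioned in the abstract.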

  8. Radiation analysis for manned missions to the Jupiter system

    NASA Technical Reports Server (NTRS)

    De Angelis, G.; Clowdsley, M. S.; Nealy, J. E.; Tripathi, R. K.; Wilson, J. W.

    2004-01-01

    An analysis for manned missions targeted to the Jovian system has been performed in the framework of the NASA RASC (Revolutionary Aerospace Systems Concepts) program on Human Exploration beyond Mars. The missions were targeted to the Jupiter satellite Callisto. The mission analysis has been divided into three main phases, namely the interplanetary cruise, the Jupiter orbital insertion, and the surface landing and exploration phases. The interplanetary phase is based on departure from the Earth-Moon L1 point. Interplanetary trajectories based on the use of different propulsion systems have been considered, with resulting overall cruise phase duration varying between two and five years. The Jupiter-approach and the orbital insertion trajectories are considered in detail, with the spacecraft crossing the Jupiter radiation belts and staying around the landing target. In the surface exploration phase the stay on the Callisto surface is considered. The satellite surface composition has been modeled based on the most recent results from the GALILEO spacecraft. In the transport computations the surface backscattering has been duly taken into account. Particle transport has been performed with the HZETRN heavy ion code for hadrons and with an in-house developed transport code for electrons and bremsstrahlung photons. The obtained doses have been compared to dose exposure limits. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  9. Radiation analysis for manned missions to the Jupiter system.

    PubMed

    De Angelis, G; Clowdsley, M S; Nealy, J E; Tripathi, R K; Wilson, J W

    2004-01-01

    An analysis for manned missions targeted to the Jovian system has been performed in the framework of the NASA RASC (Revolutionary Aerospace Systems Concepts) program on Human Exploration beyond Mars. The missions were targeted to the Jupiter satellite Callisto. The mission analysis has been divided into three main phases, namely the interplanetary cruise, the Jupiter orbital insertion, and the surface landing and exploration phases. The interplanetary phase is based on departure from the Earth-Moon L1 point. Interplanetary trajectories based on the use of different propulsion systems have been considered, with resulting overall cruise phase duration varying between two and five years. The Jupiter-approach and the orbital insertion trajectories are considered in detail, with the spacecraft crossing the Jupiter radiation belts and staying around the landing target. In the surface exploration phase the stay on the Callisto surface is considered. The satellite surface composition has been modeled based on the most recent results from the GALILEO spacecraft. In the transport computations the surface backscattering has been duly taken into account. Particle transport has been performed with the HZETRN heavy ion code for hadrons and with an in-house developed transport code for electrons and bremsstrahlung photons. The obtained doses have been compared to dose exposure limits. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  10. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissues is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis, for WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R2 = 0.99), with average errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s, versus 8 h for the GPU-based Monte Carlo, for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
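    The cross-validation metric quoted throughout the abstract (mean percent difference between probe, MC, and empirical-model fluence values) can be written in a few lines. The function name and sample values below are illustrative, not data from the study.

    ```python
    import numpy as np

    def mean_percent_error(test, reference):
        """Mean absolute percent difference between two fluence profiles,
        e.g. empirical-model vs Monte Carlo values along the beam axis."""
        t = np.asarray(test, dtype=float)
        r = np.asarray(reference, dtype=float)
        return 100.0 * np.mean(np.abs(t - r) / r)
    ```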

  11. Combined Modeling of Acceleration, Transport, and Hydrodynamic Response in Solar Flares. 1; The Numerical Model

    NASA Technical Reports Server (NTRS)

    Liu, Wei; Petrosian, Vahe; Mariska, John T.

    2009-01-01

    Acceleration and transport of high-energy particles and the fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while the plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  12. Electron linear accelerator production and purification of scandium-47 from titanium dioxide targets.

    PubMed

    Rotsch, David A; Brown, M Alex; Nolen, Jerry A; Brossard, Thomas; Henning, Walter F; Chemerisov, Sergey D; Gromov, Roman G; Greene, John

    2018-01-01

    The photonuclear production of no-carrier-added (NCA) 47Sc from solid natTiO2 and the subsequent chemical processing and purification have been developed. Scandium-47 was produced by the 48Ti(γ,p)47Sc reaction with bremsstrahlung photons produced from the braking of electrons in a high-Z (W or Ta) convertor. Production yields were simulated with the PHITS code (Particle and Heavy Ion Transport code System) and compared to experimental results. Irradiated TiO2 targets were dissolved in fuming H2SO4 in the presence of Na2SO4, and 47Sc was purified using the commercially available Eichrom DGA resin. Typical 47Sc recovery yields were >90% with excellent specific activity for small batches (<185 MBq). Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Electron linear accelerator production and purification of scandium-47 from titanium dioxide targets

    DOE PAGES

    Rotsch, David A.; Brown, M. Alex; Nolen, Jerry A.; ...

    2017-11-06

    Here, the photonuclear production of no-carrier-added (NCA) 47Sc from solid natTiO2 and the subsequent chemical processing and purification have been developed. Scandium-47 was produced by the 48Ti(γ,p)47Sc reaction with bremsstrahlung photons produced from the braking of electrons in a high-Z (W or Ta) convertor. Production yields were simulated with the PHITS code (Particle and Heavy Ion Transport code System) and compared to experimental results. Irradiated TiO2 targets were dissolved in fuming H2SO4 in the presence of Na2SO4, and 47Sc was purified using the commercially available Eichrom DGA resin. Typical 47Sc recovery yields were >90% with excellent specific activity for small batches (<185 MBq).

  14. Electron linear accelerator production and purification of scandium-47 from titanium dioxide targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rotsch, David A.; Brown, M. Alex; Nolen, Jerry A.

    Here, the photonuclear production of no-carrier-added (NCA) 47Sc from solid natTiO2 and the subsequent chemical processing and purification have been developed. Scandium-47 was produced by the 48Ti(γ,p)47Sc reaction with bremsstrahlung photons produced from the braking of electrons in a high-Z (W or Ta) convertor. Production yields were simulated with the PHITS code (Particle and Heavy Ion Transport code System) and compared to experimental results. Irradiated TiO2 targets were dissolved in fuming H2SO4 in the presence of Na2SO4, and 47Sc was purified using the commercially available Eichrom DGA resin. Typical 47Sc recovery yields were >90% with excellent specific activity for small batches (<185 MBq).

  15. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design enables these sources to share the free parameters of the filter shape and to be related to each other through the photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD.
The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm2 fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
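    The primary/secondary source design described above (primary photons attenuated along the filter path length, secondary photons fed by the decrement of the primary) can be sketched as a per-energy-bin split. This is a schematic of the idea only; the actual model also parameterizes the filter shape and penumbral fluence, and the function name and values are illustrative.

    ```python
    import numpy as np

    def filter_source_split(spectrum, mu, path_length):
        """Split a bremsstrahlung spectrum into the primary (transmitted)
        and secondary (interacted-in-filter) photon sources.

        spectrum:    photons per energy bin entering the flattening filter
        mu:          linear attenuation coefficient per bin (1/cm)
        path_length: filter path length along this ray (cm)
        """
        spectrum = np.asarray(spectrum, dtype=float)
        mu = np.asarray(mu, dtype=float)
        primary = spectrum * np.exp(-mu * path_length)  # attenuated primaries
        secondary = spectrum - primary                  # feeds the scatter source
        return primary, secondary
    ```

    By construction the two sources conserve the incident photon count, which is what couples their free parameters during the optimization.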

  16. Microfocusing of the FERMI@Elettra FEL beam with a K-B active optics system: Spot size predictions by application of the WISE code

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Svetina, C.; Mahne, N.; Cocco, D.; Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M.; De Ninno, G.; Zeitoun, P.; Dovillaire, G.; Lambert, G.; Boutu, W.; Merdji, H.; Gonzalez, A. I.; Gauthier, D.; Zangrando, M.

    2013-05-01

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10-100 fs) pulses with ultrahigh peak brightness and wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics in order to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology characterization, it is difficult to predict the focal spot shape using only methods based on geometrical optics, such as ray tracing. Within the geometrical optics approach one cannot take into account diffraction effects from the optics edges, i.e. aperture diffraction, or the impact of different surface spatial wavelengths on the spot-size degradation. Both effects are strongly dependent on the photon beam energy and the mirror incidence angles. We therefore employed a method based on physical optics, which applies the Huygens-Fresnel principle to reflection and on which the WISE code is based. In this work we report the results of the first measurements of the focal spot in the DiProI beamline end-station and compare them to the predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization.

  17. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphics processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator, and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.

  18. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE PAGES

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity, as evidenced by dense coding, which predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states, and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states, and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N = 6, 8.
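    The numbers quoted above follow from simple channel-capacity arithmetic: four distinguishable Bell states carry log2(4) = 2 bits per photon pair (a 1-bit gain over the 1-bit entanglement-free channel), while a linear-optical Bell measurement that cannot separate two of the four states leaves at most three distinguishable messages, bounding the gain at log2(3) - 1 ≈ 0.585 bits.

    ```python
    import math

    # Ideal dense coding: all 4 Bell states distinguishable -> 2 bits per
    # photon pair, a gain of 1 bit over the entanglement-free channel.
    ideal_gain = math.log2(4) - 1

    # Linear optics cannot discriminate two of the four Bell states, leaving
    # 3 distinguishable messages and bounding the gain at log2(3) - 1.
    linear_optics_gain = math.log2(3) - 1

    print(ideal_gain, round(linear_optics_gain, 3))  # 1.0 0.585
    ```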

  19. Synthetic reconstruction of recycling on the limiter during startup phase of W7-X based on EMC3-EIRENE simulations

    NASA Astrophysics Data System (ADS)

    Frerichs, Heinke; Effenberg, Florian; Schmitz, Oliver; Stephey, Laurie; W7-X Team

    2016-10-01

    Interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be a challenge due to line-of-sight integration effects. The EMC3-EIRENE code - a 3D fluid edge plasma and kinetic neutral gas transport code - is a suitable tool for full 3D reconstruction of such signals. A versatile synthetic diagnostic module has been developed recently which allows the realistic three-dimensional setup of various plasma edge diagnostics to be captured. We present an analysis of recycling on the inboard limiter of W7-X during its startup phase using a synthetic camera for Hα light observations, and reconstruct the particle flux from these synthetic images based on ionization-per-photon coefficients (S/XB). We find that line-of-sight integration effects can lead to misinterpretation of data (redistribution of particle flux due to neutral gas diffusion), and that local plasma effects are important for the correct treatment of photon emissions. This work was supported by the U.S. Department of Energy (DOE) under Grant DE-SC0014210, by startup funds of the Department of Engineering Physics at the University of Wisconsin - Madison, and by the EUROfusion Consortium under Euratom Grant No 633053.
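    Reconstructing a particle flux from Hα brightness via S/XB coefficients is, at its core, a per-pixel multiplication. The sketch below assumes a single constant S/XB value (hypothetical; in practice S/XB depends on the local plasma density and temperature, which is exactly why line-of-sight effects matter), and the function name and pixel geometry are illustrative.

    ```python
    import numpy as np

    def recycling_flux(halpha_brightness, s_xb, pixel_area_m2):
        """Convert an H-alpha brightness image (photons m^-2 s^-1 per pixel)
        into a recycling particle flux via Gamma_atoms = (S/XB) * Gamma_photons.

        Returns the local flux map (atoms m^-2 s^-1) and the total recycling
        rate (atoms/s) over the imaged surface."""
        flux_map = s_xb * np.asarray(halpha_brightness, dtype=float)
        total_rate = float(flux_map.sum() * pixel_area_m2)
        return flux_map, total_rate
    ```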

  20. Effect of the qubit relaxation on transport properties of microwave photons

    NASA Astrophysics Data System (ADS)

    Sultanov, A. N.; Greenberg, Ya. S.

    2017-11-01

    In this work, using the non-Hermitian Hamiltonian method, we study the transmission of a single photon in a one-dimensional waveguide interacting with a cavity that contains an arbitrary number of photons and a two-level artificial atom, with allowance for the relaxation of the latter. Analytical expressions for the transport coefficients that explicitly take the qubit relaxation parameter into account have been obtained. The form of the transmission (reflection) coefficient when there is more than one photon in the cavity differs qualitatively from the single-photon case and shows a manifestation of the photon blockade effect. The qubit lifetime depends on the number of photons in the cavity.

  1. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study was aimed at investigating the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with 10^8 histories or more and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  2. Photon-assisted quantum transport in quantum point contacts

    NASA Astrophysics Data System (ADS)

    Hu, Qing

    1993-02-01

    We have studied the feasibility of photon-assisted quantum transport in semiconductor quantum point contacts or electron waveguides. Due to photon-induced intersubband transitions, it is expected that the drain/source conductance of the quantum point contacts can be modulated by far-infrared (f not less than 300 GHz) radiation, which is similar to the photon-assisted tunneling in superconducting tunnel junctions. An antenna/gate electrodes structure will be used to couple far-infrared photons into quantum point contacts of submicron dimensions. A calculation of the photon-induced drain/source current as a function of the far-infrared radiation power is also presented.

  3. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels, and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition, ⟨θ⟩, is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts.
These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.
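    The "expected polar angle" values quoted above are consistent with a dose-weighted mean over the angular scoring bins; a minimal sketch of that average (with made-up bin values, not the study's data) is:

    ```python
    import numpy as np

    def expected_polar_angle(theta_deg, dose):
        """Dose-weighted mean polar angle <theta> of energy deposition,
        computed from an angular dose distribution."""
        theta = np.asarray(theta_deg, dtype=float)
        d = np.asarray(dose, dtype=float)
        return float((theta * d).sum() / d.sum())
    ```

    For a flat angular distribution the weighted mean reduces to the plain mean, e.g. bins at 15°, 45°, 75° with equal dose give 45°; shifting dose toward forward angles pulls ⟨θ⟩ down accordingly.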

  4. On the effect of updated MCNP photon cross section data on the simulated response of the HPA TLD.

    PubMed

    Eakins, Jonathan

    2009-02-01

    The relative response of the new Health Protection Agency thermoluminescence dosimeter (TLD) has been calculated for Narrow Series X-ray distribution and (137)Cs photon sources using the Monte Carlo code MCNP5, and the results compared with those obtained during its design stage using the predecessor code, MCNP4c2. The results agreed at intermediate energies (approximately 0.1 MeV to (137)Cs), but differed at low energies (<0.1 MeV) by up to approximately 10%. This disparity has been ascribed to differences in the default photon interaction data used by the two codes, and derives ultimately from the effect on absorbed dose of the recent updates to the photoelectric cross sections. The sources of these data have been reviewed.

  5. Many-integrated core (MIC) technology for accelerating Monte Carlo simulation of radiation transport: A study based on the code DPM

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Brualla, L.

    2018-04-01

    Monte Carlo simulation of radiation transport is computationally demanding when reasonably low statistical uncertainties of the estimated quantities are required. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated core (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of a double 12-core Xeon processor in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU. These tests were intended to establish a baseline comparison between both devices. Second, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU owes to the low performance of its single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.

  6. Monte Carlo Shielding Comparative Analysis Applied to TRIGA HEU and LEU Spent Fuel Transport

    NASA Astrophysics Data System (ADS)

    Margeanu, C. A.; Margeanu, S.; Barbos, D.; Iorgulis, C.

    2010-12-01

    The paper is a comparative study of the effects of LEU and HEU fuel utilization on the shielding analysis for spent fuel transport. A comparison against measured data for HEU spent fuel, available from the last stage of the spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were taken from the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface, and in air at 1 m and 2 m from the cask, by means of the 3D Monte Carlo code MORSE-SGC. Before loading into the shipping cask, TRIGA spent fuel source terms and spent fuel parameters were obtained with the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 package. The actinide contribution to total fuel radioactivity is very low for HEU spent fuel, but about 10 times greater for LEU spent fuel. Dose rates for both HEU and LEU fuel contents are below regulatory limits, with LEU spent fuel photon dose rates greater than the HEU ones. Comparison of theoretical and measured HEU spent fuel dose rates at selected measuring points shows good agreement, with calculated values greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m from the cask surface (about 15% relative difference).

  7. Remanent Activation in the Mini-SHINE Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micklich, Bradley J.

    2015-04-16

    Argonne National Laboratory is assisting SHINE Medical Technologies in developing a domestic source of the medical isotope 99Mo through the fission of low-enrichment uranium in a uranyl sulfate solution. In Phase 2 of these experiments, electrons from a linear accelerator create neutrons by interacting in a depleted uranium target, and these neutrons are used to irradiate the solution. The resulting neutron and photon radiation activates the target, the solution vessels, and a shielded cell that surrounds the experimental apparatus. When the experimental campaign is complete, the target must be removed into a shielding cask, and the experimental components must be disassembled. The radiation transport code MCNPX and the transmutation code CINDER were used to calculate the radionuclide inventories of the solution, the target assembly, and the shielded cell, and to determine the dose rates and shielding requirements for selected removal scenarios for the target assembly and the solution vessels.

  8. Robust light transport in non-Hermitian photonic lattices

    PubMed Central

    Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della

    2015-01-01

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
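    The asymmetry induced by an imaginary gauge field can be sketched with a toy Hatano-Nelson-type amplitude calculation; the hopping amplitude t, field strength h and hop count N below are illustrative values, not parameters from the paper.

    ```python
    import math

    # In a 1D lattice with an imaginary gauge field, forward hops carry
    # amplitude t*exp(+h) and backward hops t*exp(-h), so after N hops the
    # forward/backward amplitude ratio grows as exp(2*N*h): one direction
    # is amplified, the other damped.
    t, h, N = 1.0, 0.1, 20

    forward = (t * math.exp(+h)) ** N    # amplified direction
    backward = (t * math.exp(-h)) ** N   # damped direction

    asymmetry = forward / backward       # = exp(2*N*h)
    print(round(asymmetry, 3))           # exp(4), about 54.598
    ```

    This exponential directional imbalance is what makes the forward transport insensitive to disorder in the non-Hermitian lattice.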

  9. Robust light transport in non-Hermitian photonic lattices.

    PubMed

    Longhi, Stefano; Gatti, Davide; Della Valle, Giuseppe

    2015-08-28

    Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.

  10. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have strong potential for correcting image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast ordered-subset expectation maximization (OSEM) reconstruction of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture (CUDA) language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulation of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoelectric absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used to model photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and AC OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons per voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations, so the reconstruction time was around 3 min.
    The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction: the activity recovery was 88% for the largest sphere, versus 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments showed clearly improved resolution and contrast.
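    The OSEM update that such a code plugs its Monte Carlo projector into can be sketched in pure Python for a single subset; the 2×2 system matrix and measured counts below are made-up numbers, and the real reconstruction replaces the analytic forward projection with MC photon transport.

    ```python
    # Minimal MLEM/OSEM update step (one subset), as a sketch of the scheme.
    A = [[0.8, 0.2],
         [0.3, 0.7]]      # system matrix (detector bin x voxel), assumed
    y = [1.2, 0.8]        # measured projections, assumed
    x = [1.0, 1.0]        # current activity estimate

    def mlem_step(A, y, x):
        m, n = len(A), len(x)
        fp = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # forward project
        ratio = [y[i] / fp[i] for i in range(m)]                        # data/model ratio
        sens = [sum(A[i][j] for i in range(m)) for j in range(n)]       # sensitivity image
        bp = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # back project
        return [x[j] * bp[j] / sens[j] for j in range(n)]               # multiplicative update

    x = mlem_step(A, y, x)
    ```

    Cycling this update over ordered subsets of projection angles gives the OSEM iteration described in the abstract.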

  11. Comparison study of photon attenuation characteristics of Lead-Boron Polyethylene by MCNP code, XCOM and experimental data

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jia, Mingchun; Gong, Junjun; Xia, Wenming

    2017-08-01

    The linear attenuation coefficient, mass attenuation coefficient, and mean free path of various Lead-Boron Polyethylene (PbBPE) samples, which can be used as photon shielding materials in marine reactors, have been simulated using the Monte Carlo N-Particle code MCNP5. The MCNP simulation results are in good agreement with XCOM values and reported experimental data for caesium-137 and cobalt-60 sources. This MCNP-based method can thus be used to simulate the photon attenuation characteristics of various types of PbBPE materials.
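    The three quantities in the abstract are related through the Beer-Lambert law; a minimal sketch, assuming an illustrative mass attenuation coefficient and density rather than the paper's PbBPE values:

    ```python
    import math

    # mu = (mu/rho) * rho links the mass and linear attenuation coefficients;
    # the mean free path is 1/mu. Values below are assumed for illustration.
    mass_att = 0.0857      # cm^2/g, roughly water-like near the Cs-137 line
    density = 1.2          # g/cm^3, assumed sample density

    mu = mass_att * density              # linear attenuation coefficient, 1/cm
    mfp = 1.0 / mu                       # mean free path, cm
    transmission = math.exp(-mu * 5.0)   # uncollided fraction through 5 cm

    print(round(mu, 4), round(mfp, 2))
    ```

    Codes like MCNP and databases like XCOM are, in effect, two independent routes to the same μ/ρ values that this relation consumes.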

  12. Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweezy, Jeremy Ed

    2016-01-21

    The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k-effective and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.

  13. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on extensive 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark compares the neutron flux through the EPP. This problem is quite challenging given the complex geometry and the strong neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving confidence in the neutronics results.

  14. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of photon-counting camera for fast, low-light-level imaging applications is introduced. The spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is an evolution of the existing "PAPA" (Precision Analog Photon Address) camera, which was designed for visible photons; the improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or, alternatively, photomultiplier tubes) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.
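    The mask's addressing scheme rests on the binary-reflected Gray code, in which adjacent positions differ in exactly one bit, so a spot straddling two mask strips is misread by at most one position. A minimal sketch of the encoding and its inverse:

    ```python
    def to_gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Invert the Gray code by folding the bits back down."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    codes = [to_gray(n) for n in range(8)]
    print(codes)   # [0, 1, 3, 2, 6, 7, 5, 4] -- neighbours differ by one bit
    ```

    In the camera, each bit of the code corresponds to one mask pattern read out by one detector behind it.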

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lougovski, P.; Uskov, D. B.

    Entanglement can effectively increase communication channel capacity, as evidenced by dense coding, which predicts a capacity gain of 1 bit over entanglement-free protocols. However, dense coding relies on Bell states, and when implemented with photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states, and at the same time provide a capacity gain greater than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N = 6, 8.
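    The 0.585-bit bound quoted above follows from counting distinguishable outcomes: linear optics can discriminate only three of the four Bell states, so the Bell-state channel carries log2(3) bits instead of the ideal log2(4) = 2 bits.

    ```python
    import math

    # Capacity gain over the 1-bit entanglement-free channel.
    ideal_gain = math.log2(4) - 1     # full Bell-state discrimination: 1 bit
    optical_gain = math.log2(3) - 1   # only 3 of 4 Bell states resolvable

    print(round(optical_gain, 3))   # 0.585
    ```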

  16. Observation of valley-selective microwave transport in photonic crystals

    NASA Astrophysics Data System (ADS)

    Ye, Liping; Yang, Yuting; Hong Hang, Zhi; Qiu, Chunyin; Liu, Zhengyou

    2017-12-01

    Recently, the discrete valley degree of freedom has attracted extensive attention in condensed matter physics. Here, we present an experimental observation of the intriguing valley transport for microwaves in photonic crystals, including the bulk valley transport and the valley-projected edge modes along the interface separating different photonic insulating phases. For both cases, valley-selective excitations are realized by a point-like chiral source located at proper locations inside the samples. Our results are promising for exploring unprecedented routes to manipulate microwaves.

  17. Experimental implementation of the Bacon-Shor code with 10 entangled photons

    NASA Astrophysics Data System (ADS)

    Gimeno-Segovia, Mercedes; Sanders, Barry C.

    The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center (NSF Grant PHY-1125565) with support of the Moore Foundation (GBMF-2644).

  18. Nanoporous hard data: optical encoding of information within nanoporous anodic alumina photonic crystals

    NASA Astrophysics Data System (ADS)

    Santos, Abel; Law, Cheryl Suwen; Pereira, Taj; Losic, Dusan

    2016-04-01

    Herein, we present a method for storing binary data within the spectral signature of nanoporous anodic alumina photonic crystals. A rationally designed multi-sinusoidal anodisation approach makes it possible to engineer the photonic stop band of nanoporous anodic alumina with precision. As a result, the transmission spectrum of these photonic nanostructures can be engineered to feature well-resolved and selectively positioned characteristic peaks across the UV-visible spectrum. Using this property, we implement an 8-bit binary code and assess the versatility and capability of this system by a series of experiments aiming to encode different information within the nanoporous anodic alumina photonic crystals. The obtained results reveal that the proposed nanosized platform is robust, chemically stable, versatile and has a set of unique properties for data storage, opening new opportunities for developing advanced nanophotonic tools for a wide range of applications, including sensing, photonic tagging, self-reporting drug releasing systems and secure encoding of information. Electronic supplementary information (ESI) available: further details about anodisation profiles, SEM cross-section images, digital pictures, transmission spectra, photonic barcodes and ASCII codes of the different NAA photonic crystals fabricated and analysed in this study. See DOI: 10.1039/c6nr01068g
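    A toy version of the 8-bit encoding idea: each bit maps to the presence or absence of a transmission peak at a preassigned wavelength. The wavelength grid below is an assumption for the sketch, not the paper's design.

    ```python
    # One wavelength slot per bit, MSB first (illustrative grid).
    WAVELENGTHS_NM = [450, 475, 500, 525, 550, 575, 600, 625]

    def encode(byte: int) -> list[int]:
        """Wavelengths whose transmission peak is 'on' for this byte."""
        return [w for i, w in enumerate(WAVELENGTHS_NM) if byte >> (7 - i) & 1]

    def decode(peaks: list[int]) -> int:
        """Recover the byte from the set of observed peak positions."""
        byte = 0
        for i, w in enumerate(WAVELENGTHS_NM):
            if w in peaks:
                byte |= 1 << (7 - i)
        return byte

    print(encode(ord("A")))   # peaks for 0b01000001: [475, 625]
    ```

    In the actual device, the anodisation profile determines which stop-band peaks appear, playing the role of `encode` here.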

  19. Smart photonic networks and computer security for image data

    NASA Astrophysics Data System (ADS)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are: very high maximum data rates; predominance of image data; narrowcasting, i.e. transmission of data from one source to a designated set of receivers; data fusion, combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures, and their relationships with very high-performance processor architectures, were studied. Indications are that routers for very high-speed photonic networks can be designed using the robust and distributed TCP/IP protocol, if suitable processor architecture support is available.

  20. AN ASSESSMENT OF MCNP WEIGHT WINDOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. HENDRICKS; C. N. CULBERTSON

    2000-01-01

    The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements: an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
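    The per-particle logic of a weight window can be sketched as follows; the window bounds and survival weight below are illustrative values, not MCNP defaults.

    ```python
    import random

    # Weight-window check for one particle at one phase-space point: split
    # when the weight is above the window, play Russian roulette when it is
    # below, and leave it alone when it lies inside the window.
    W_LOW, W_HIGH = 0.5, 2.0        # lower/upper window bounds (assumed)
    W_SURVIVE = 1.0                 # survival weight for roulette (assumed)

    def apply_weight_window(weight, rng=random.random):
        if weight > W_HIGH:                    # split into n equal copies
            n = int(weight / W_HIGH) + 1
            return [weight / n] * n
        if weight < W_LOW:                     # Russian roulette
            if rng() < weight / W_SURVIVE:
                return [W_SURVIVE]             # survives with weight reset
            return []                          # killed
        return [weight]                        # inside the window: unchanged

    print(apply_weight_window(5.0))   # three copies of weight 5/3
    ```

    The importance-function generators discussed in the assessment exist to choose good W_LOW/W_HIGH values per cell or per mesh element.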

  1. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (the skyshine component). Two different three-dimensional geometrical models of a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  2. The radiation environment on the Moon from galactic cosmic rays in a lunar habitat.

    PubMed

    Jia, Y; Lin, Z W

    2010-02-01

    We calculated how the radiation environment in a habitat on the surface of the Moon would depend on the thickness of the habitat for the 1977 galactic cosmic-ray environment. The Geant4 Monte Carlo transport code was used, and a hemispherical dome made of lunar regolith was used to model the lunar habitat. We investigated the effective dose from primary and secondary particles, including nuclei from protons up to nickel, neutrons, charged pions, photons, electrons and positrons. The total effective dose decreased strongly with the thickness of the habitat dome. However, the effective dose values from secondary neutrons, charged pions, photons, electrons and positrons all showed a strong increase followed by a gradual decrease with habitat thickness. The fraction of the summed effective dose from these secondary particles in the total effective dose increased with habitat thickness, from approximately 5% for the no-habitat case to about 47% for a habitat with an areal thickness of 100 g/cm(2).

  3. Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.

    PubMed

    Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G

    2006-08-01

    The shield performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. To determine the neutron and gamma-ray source intensities and their energy spectra, the SRIM 2003 and ALICE91 computer codes were applied to the Tl target and its Cu substrate for a 145 microA beam of 28.5 MeV protons. The MCNP/4C code was run with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays, and then with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and the neutron and gamma-ray equivalent dose rates at various positions in the maze have been calculated. The deviation between calculated and measured dose values along the maze is less than 20%.

  4. SU-E-T-132: Assess the Shielding of Secondary Neutrons From Patient Collimator in Proton Therapy Considering Secondary Photons Generated in the Shielding Process with Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamanaka, M; Takashina, M; Kurosu, K

    Purpose: In this study we present a Monte Carlo based evaluation of the shielding effect for secondary neutrons from the patient collimator, and of the secondary photons emitted in the process of neutron shielding, by a combination of moderator and boron-10 placed around the patient collimator. Methods: The PHITS Monte Carlo radiation transport code was used to simulate the proton beam (Ep = 64 to 93 MeV) from a proton therapy facility. Moderators (water, polyethylene and paraffin) and boron (pure 10B) were placed around the patient collimator in this order. The ratio of moderator to boron thickness was varied while fixing the total thickness at 3 cm. The secondary neutron and photon doses were evaluated as the ambient dose equivalent per absorbed dose [H*(10)/D]. Results: The secondary neutrons are shielded more effectively by the combination of moderators and boron. The most effective neutron-shielding combination is 2.4 cm of polyethylene with 0.6 cm of boron, giving a maximum reduction rate of 47.3%. The H*(10)/D of secondary photons in the control case is less than that of neutrons by two orders of magnitude, and the maximum increase of secondary photons is 1.0 µSv/Gy, with 2.8 cm of polyethylene and 0.2 cm of boron. Conclusion: The combination of moderators and boron is beneficial for shielding secondary neutrons. Both the control-case secondary photons and those emitted during neutron shielding are far lower than the secondary neutrons, and photons have a low RBE compared with neutrons; the secondary photons can therefore be neglected in neutron shielding. This work was supported by JSPS Core-to-Core Program (No. 23003).

  5. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function-the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile-is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears as favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.

  6. The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results

    NASA Technical Reports Server (NTRS)

    Katz, J.

    1982-01-01

    The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. It is indicated that the throughput efficiency can be achieved not only with a Reed-Solomon code, as originally predicted, but with a less complex code as well.
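    A back-of-envelope view of where a bits-per-detected-photon figure comes from: an M-ary PPM symbol carries log2(M) bits, and with an outer code of rate R and an average of n_det detected photons per pulse, the efficiency is R·log2(M)/n_det. The numbers below are illustrative, not the experiment's operating point.

    ```python
    import math

    # Illustrative PPM link budget (all parameters assumed).
    M = 256            # PPM alphabet size
    R = 0.78           # outer code rate
    n_det = 2.5        # mean detected photons per laser pulse

    bits_per_photon = R * math.log2(M) / n_det
    print(round(bits_per_photon, 3))   # about 2.496
    ```

    Larger alphabets or lighter coding raise the efficiency, at the cost of timing resolution and error protection respectively.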

  7. Computing Temperatures in Optically Thick Protoplanetary Disks

    NASA Technical Reports Server (NTRS)

    Capuder, Lawrence F., Jr.

    2011-01-01

    We worked with a Monte Carlo radiative transfer code to simulate the transfer of energy through protoplanetary disks, where planet formation occurs. The code tracks photons from the star into the disk, through scattering, absorption and re-emission, until they escape to infinity. High optical depths in the disk interior dominate the computation time, because it takes a photon packet many interactions to get out of such a region. Regions of high optical depth also receive few photons and therefore do not have well-estimated temperatures. We applied a modified random walk (MRW) approximation for treating high optical depths and speeding up the Monte Carlo calculations. The MRW is implemented by calculating the average number of interactions the photon packet will undergo in diffusing within a single cell of the spatial grid, and then updating the packet position, packet frequencies, and local radiation absorption rate appropriately. The MRW approximation was then tested for accuracy and speed against the original code. We determined that the MRW provides accurate answers in Monte Carlo radiative transfer simulations. The speed gained from using the MRW is shown to be proportional to the disk mass.
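    Why high optical depths dominate the run time can be seen from a rough diffusion estimate: a photon random-walking out of a region of optical depth τ scatters on the order of τ² times, and the MRW collapses that whole excursion into a single update. The quadratic scaling below is a standard diffusion argument, not a formula quoted from the report.

    ```python
    # Rough estimate of the number of scatterings before escape: ~tau for
    # optically thin regions, ~tau**2 once the region is optically thick.
    def expected_scatterings(tau: float) -> float:
        return max(tau, tau * tau)

    for tau in (1, 10, 100):
        print(tau, expected_scatterings(tau))
    # a tau = 100 cell costs ~10,000 interactions without the MRW
    ```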

  8. Transverse angular momentum in topological photonic crystals

    NASA Astrophysics Data System (ADS)

    Deng, Wei-Min; Chen, Xiao-Dong; Zhao, Fu-Li; Dong, Jian-Wen

    2018-01-01

    Engineering the local angular momentum of structured light fields in real space enables applications in many fields, in particular the realization of unidirectional robust transport in topological photonic crystals with a non-trivial Berry vortex in momentum space. Here, we show transverse angular momentum modes in silicon topological photonic crystals when considering transverse electric polarization. Excited by a chiral external source carrying either transverse spin angular momentum or a transverse phase vortex, robust light flow propagating along opposite directions is observed at several kinds of sharp-turn interfaces between two topologically distinct silicon photonic crystals. A transverse orbital angular momentum mode with alternating phase vortex exists at the boundary of two such photonic crystals. In addition, the unidirectional transport is robust across the working frequency band even when the ring size or the location of the pseudo-spin source varies within a certain range, an advantage for broadband photonic devices. These findings enable one to exploit transverse angular momentum, as an additional degree of freedom, to achieve unidirectional robust transport in the telecom region, with potential applications in integrated photonic circuits such as on-chip robust delay lines.

  9. Update On the Status of the FLUKA Monte Carlo Transport Code*

    NASA Technical Reports Server (NTRS)

    Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.

    2006-01-01

    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. On the physics side, improvements include the extension of PEANUT to higher energies for p, n, pi and pbar/nbar, and for nbars down to the lowest energies; the addition of an online capability to evolve radioactive products and obtain subsequent dose rates; and an upgraded treatment of EM interactions that eliminates the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photoelectric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, and the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available, along with rQMD 2.4, for heavy-ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool.
On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, dose calculations for radiation therapy as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.

  10. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy-spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increase of primary fluence with off-axis angle (the 'horn effect'), and electron contamination. The patient-dependent aspect of the MC dose calculation uses the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to drive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
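The spectrum parameterisation described above can be sketched by taking the "Fatigue function" to be the Birnbaum-Saunders (fatigue-life) density and multiplying it by a Fermi cut-off near the accelerating potential. All parameter values below are illustrative placeholders, not the commissioned values from the abstract.

```python
import math

def fatigue_pdf(e, alpha, beta):
    """Birnbaum-Saunders ('fatigue life') density, assumed here to play
    the role of the source model's 'Fatigue function'; e > 0 in MeV."""
    a = (math.sqrt(e / beta) - math.sqrt(beta / e)) / alpha
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    return (math.sqrt(e / beta) + math.sqrt(beta / e)) / (2.0 * alpha * e) * phi

def fermi_cutoff(e, e_max, width):
    """Multiplying Fermi function: ~1 well below e_max, ~0 above it."""
    return 1.0 / (1.0 + math.exp((e - e_max) / width))

def spectrum(e, alpha=0.8, beta=1.5, e_max=6.0, width=0.05):
    """Unnormalised photon energy spectrum for a nominal 6 MV beam
    (all parameter values are illustrative, not fitted)."""
    return fatigue_pdf(e, alpha, beta) * fermi_cutoff(e, e_max, width)

energies = [0.5 * k for k in range(1, 13)]     # 0.5 ... 6.0 MeV
weights = [spectrum(e) for e in energies]
print([round(w, 4) for w in weights])
```

The sharp Fermi factor forces the weight to zero just above the nominal maximum energy, which is how the multiplying cut-off regularizes the spectrum-determination problem.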

  11. Periodically modulated single-photon transport in one-dimensional waveguide

    NASA Astrophysics Data System (ADS)

    Li, Xingmin; Wei, L. F.

    2018-03-01

    Single-photon transport along a one-dimensional waveguide interacting with a quantum system (e.g., a two-level atom) is a very useful and meaningful simplified model of waveguide-based optical quantum devices. How to modulate the transport of photons in such waveguide structures by adjusting certain external parameters is therefore particularly important. In this paper, we discuss how such a modulation could be implemented by periodically driving the energy splitting of the interacting atom and the atom-photon coupling strength. By generalizing the well-developed time-independent full quantum mechanical theory in real space to the time-dependent case, we show that various sideband-transmission phenomena can be observed. This means that, with these modulations, the photon has certain probabilities to transmit through the scattering atom in the other energy sidebands. Conversely, by controlling the sideband transmission, periodic modulations of single-photon waveguide devices could be designed for future optical quantum information processing applications.

  12. Generating multi-photon W-like states for perfect quantum teleportation and superdense coding

    NASA Astrophysics Data System (ADS)

    Li, Ke; Kong, Fan-Zhen; Yang, Ming; Ozaydin, Fatih; Yang, Qing; Cao, Zhuo-Liang

    2016-08-01

    An interesting aspect of multipartite entanglement is that for perfect teleportation and superdense coding, not the maximally entangled W states but a special class of non-maximally entangled W-like states are required. Therefore, efficient preparation of such W-like states is of great importance in quantum communications, which has not been studied as much as the preparation of W states. In this paper, we propose a simple optical scheme for efficient preparation of large-scale polarization-based entangled W-like states by fusing two W-like states or expanding a W-like state with an ancilla photon. Our scheme can also generate large-scale W states by fusing or expanding W or even W-like states. The cost analysis shows that in generating large-scale W states, the fusion mechanism achieves a higher efficiency with non-maximally entangled W-like states than maximally entangled W states. Our scheme can also start fusion or expansion with Bell states, and it is composed of a polarization-dependent beam splitter, two polarizing beam splitters and photon detectors. Requiring no ancilla photon or controlled gate to operate, our scheme can be realized with current photonics technology, and we believe it will enable advances in quantum teleportation and superdense coding in multipartite settings.

  13. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chamber (IC) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, an important issue is to evaluate the photon and electron response functions of the two commercially available ionization chambers, denoted TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination, and many treatment planning systems, are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among the codes and against carefully measured values, aiming at a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Energy-dependent response functions of the two chambers were calculated in a parallel beam of mono-energetic photons and electrons from 20 keV to 20 MeV, using both an optimal simple spherical model and a detailed IC model. The measurements were performed in well-defined fields: (a) the four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) a BNCT clinical-trial neutron beam. For the TE(TE) chamber, all codes gave almost identical results over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes for photon energies below 0.1 MeV and a similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams, but for the Mg(Ar) chamber the deviations reached 7.8-16.5% below 120 kVp X-ray beams. In this study we were especially interested in BNCT doses, where the low-energy photon contribution is not negligible; the MCNP model is recognized as the most suitable for simulating the broadly distributed photon-electron and neutron energy responses of the paired ICs. MCNP also provides the best prediction for BNCT source adjustment from the detector's neutron and photon responses.
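The paired-chamber technique mentioned in the abstract reduces, in its simplest textbook form, to solving two linear equations relating the chamber readings to the neutron and photon doses through relative sensitivities. The sketch below uses hypothetical sensitivity values; the real h and k factors come from calibration and the Monte Carlo response functions discussed above.

```python
def paired_ic_doses(m_te, m_mg, h_te=1.0, k_te=1.0, h_mg=0.1, k_mg=1.0):
    """Solve the paired ionization chamber equations
        m_te = h_te * D_n + k_te * D_g   (TE(TE): neutron + photon)
        m_mg = h_mg * D_n + k_mg * D_g   (Mg(Ar): mostly photon)
    for the neutron dose D_n and photon dose D_g.  The relative
    sensitivities h, k here are illustrative placeholders, not
    calibrated values for the chambers in the abstract."""
    det = h_te * k_mg - k_te * h_mg
    d_n = (m_te * k_mg - k_te * m_mg) / det
    d_g = (h_te * m_mg - m_te * h_mg) / det
    return d_n, d_g

# Example: readings normalised to dose units.
d_n, d_g = paired_ic_doses(m_te=1.30, m_mg=0.43)
print(d_n, d_g)
```

Because the Mg(Ar) chamber is nearly insensitive to neutrons (small h_mg), it pins down the photon dose, and the TE(TE) reading then yields the neutron dose; this is why the neutron dose inherits the accuracy of the photon response functions.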

  14. Photonic sensor applications in transportation security

    NASA Astrophysics Data System (ADS)

    Krohn, David A.

    2007-09-01

    There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.

  15. Characterization of a plasma photonic crystal using a multi-fluid plasma model

    NASA Astrophysics Data System (ADS)

    Thomas, W. R.; Shumlak, U.; Wang, B.; Righetti, F.; Cappelli, M. A.; Miller, S. T.

    2017-10-01

    Plasma photonic crystals have the potential to significantly expand the capabilities of current microwave filtering and switching technologies by providing high speed (μs) control of energy band-gap/pass characteristics in the GHz through low THz range. While photonic crystals consisting of dielectric, semiconductor, and metallic matrices have seen thousands of articles published over the last several decades, plasma-based photonic crystals remain a relatively unexplored field. Numerical modeling efforts so far have largely used the standard methods of analysis for photonic crystals (the Plane Wave Expansion Method, Finite Difference Time Domain, and ANSYS finite element electromagnetic code HFSS), none of which capture nonlinear plasma-radiation interactions. In this study, a 5N-moment multi-fluid plasma model is implemented using University of Washington's WARPXM finite element multi-physics code. A two-dimensional plasma-vacuum photonic crystal is simulated and its behavior is characterized through the generation of dispersion diagrams and transmission spectra. These results are compared with theory, experimental data, and ANSYS HFSS simulation results. This research is supported by a Grant from United States Air Force Office of Scientific Research.

  16. The statistical fluctuation study of quantum key distribution by means of the uncertainty principle

    NASA Astrophysics Data System (ADS)

    Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei

    2018-03-01

    Laser defects in emitting single photons, photon signal attenuation and error propagation have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons and the quantum bit error rate (QBER). We then calculate the error between the measured and real value of every parameter, and account for the propagation of error among all the measured values. We rephrase the Gottesman-Lo-Lutkenhaus-Preskill (GLLP) formula in consideration of those parameters and generate the QKD simulation result. In this study, the safe distribution distance increases with the coding photon length. When the coding photon length is N = 10^{11}, the safe distribution distance can reach almost 118 km. This gives a lower bound on the safe transmission distance than the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory, but makes it more realistic.
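The GLLP formula referred to above has the familiar lower-bound form R ≥ q(−Qμ·f·H2(Eμ) + Q1[1 − H2(e1)]). This is a sketch with illustrative parameter values; the statistical-fluctuation analysis in the abstract would replace the point values of Q1 and e1 by their worst-case fluctuation bounds.

```python
import math

def h2(p):
    """Binary Shannon entropy."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gllp_rate(q_mu, e_mu, q1, e1, f_ec=1.16, q=0.5):
    """GLLP-style secure-key-rate lower bound (per signal pulse):
        R >= q * ( -Q_mu * f * H2(E_mu) + Q_1 * [1 - H2(e_1)] )
    Q_mu/E_mu are the overall gain and QBER; Q_1/e_1 the single-photon
    gain and error rate (estimated, e.g., via decoy states).  The
    numerical inputs used below are illustrative only."""
    return q * (-q_mu * f_ec * h2(e_mu) + q1 * (1.0 - h2(e1)))

print(gllp_rate(q_mu=1e-3, e_mu=0.02, q1=5e-4, e1=0.015))
```

The key rate stays positive only while the error-correction cost term is outweighed by the single-photon contribution; as fluctuation bounds widen with distance, the rate crosses zero, which is what sets the maximum safe distribution distance.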

  17. COMBINED MODELING OF ACCELERATION, TRANSPORT, AND HYDRODYNAMIC RESPONSE IN SOLAR FLARES. I. THE NUMERICAL MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Wei; Petrosian, Vahe; Mariska, John T.

    2009-09-10

    Acceleration and transport of high-energy particles and fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ≈10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  18. Single-photon transport through a waveguide coupling to a quadratic optomechanical system

    NASA Astrophysics Data System (ADS)

    Qiao, Lei

    2017-07-01

    We study the coherent transport of a single photon, which propagates in a one-dimensional waveguide and is scattered by a quadratic optomechanical system. Our approach, based on the Lippmann-Schwinger equation, gives an analytical solution describing the single-photon transmission and reflection properties. We analyze the transport spectra and find that they are related not only to the optomechanical system's energy-level structure, but also to its inherent parameters. In the presence of atomic degrees of freedom, we obtain a Rabi-splitting-like or an electromagnetically-induced-transparency (EIT)-like spectrum, depending on the atom-cavity coupling strength. Here, we focus on the single-photon strong-coupling regime so that single-quantum effects can be seen.
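The paper's analytic result is for a quadratic optomechanical scatterer; the simplest analogue solvable by the same Lippmann-Schwinger machinery is a single two-level scatterer in a 1D waveguide, whose textbook transmission is sketched below. This is not the paper's expression, only the standard single-emitter benchmark.

```python
def transmission(omega, omega0, gamma):
    """Textbook single-photon transmission through a 1D waveguide
    side-coupled to a single two-level scatterer,
        t(w) = (w - w0) / (w - w0 + i*gamma/2),
    the standard Lippmann-Schwinger result for the simplest scatterer
    (NOT the quadratic-optomechanical expression of the paper)."""
    t = (omega - omega0) / (omega - omega0 + 0.5j * gamma)
    return abs(t) ** 2

# On resonance the photon is perfectly reflected; far off resonance
# it is almost fully transmitted.
print(transmission(1.0, 1.0, 0.01))   # → 0.0
print(transmission(2.0, 1.0, 0.01))   # close to 1
```

The transmission dip has Lorentzian width gamma; richer level structures (Rabi splitting, EIT windows) show up as multiple poles in the same scattering amplitude.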

  19. The Sydney University PAPA camera

    NASA Astrophysics Data System (ADS)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.
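The virtue of a Gray-coded mask is that adjacent photon addresses differ in exactly one bit, so an event straddling a mask-strip boundary is misaddressed by at most one pixel rather than by a large jump. The standard binary-reflected Gray code conversions:

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: adjacent integers map to codes
    differing in exactly one bit, so an event landing on a mask-strip
    boundary is off by at most one pixel."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cumulative XOR of shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent 8-bit addresses (0..255, as along one axis of a 256x256
# detector) differ in exactly one bit.
for a in range(255):
    diff = gray_encode(a) ^ gray_encode(a + 1)
    assert diff & (diff - 1) == 0      # power of two: single bit set
print(gray_encode(5), gray_decode(gray_encode(5)))  # → 7 5
```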

  20. Ultralow Noise Monolithic Quantum Dot Photonic Oscillators

    DTIC Science & Technology

    2013-10-28

    Final performance report (01-06-2010 to 31-05-2013) on the project "Ultralow Noise Monolithic Quantum Dot Photonic Oscillators", Luke Lester, University of New Mexico (HBCU/MI), Grant/Contract Number FA9550-10-1-0276. Report dated 24-10-2013; Distribution A.

  1. Calculation of conversion coefficients for clinical photon spectra using the MCNP code.

    PubMed

    Lima, M A F; Silva, A X; Crispim, V R

    2004-01-01

    In this work, the MCNP4B code has been employed to calculate conversion coefficients from air kerma to ambient dose equivalent, H*(10)/Ka, for monoenergetic photon energies from 10 keV to 50 MeV, assuming the kerma approximation. Also estimated are the H*(10)/Ka values for photon beams produced by linear accelerators, such as the Clinac-4 and Clinac-2500, after transmission through the primary barriers of radiotherapy treatment rooms. The conversion coefficients for monoenergetic photons, with statistical uncertainty <2%, are compared with those in ICRP Publication 74, and good agreement was obtained. The conversion coefficients calculated for real clinical spectra transmitted through concrete walls 1, 1.5 and 2 m thick are in the range 1.06-1.12 Sv Gy(-1).

  2. Air-kerma strength determination of a miniature x-ray source for brachytherapy applications

    NASA Astrophysics Data System (ADS)

    Davis, Stephen D.

    A miniature x-ray source has been developed by Xoft Inc. for high dose-rate brachytherapy treatments. The source is contained in a 5.4 mm diameter water-cooling catheter. The source voltage can be adjusted from 40 kV to 50 kV and the beam current is adjustable up to 300 μA. Electrons are accelerated toward a tungsten-coated anode to produce a lightly-filtered bremsstrahlung photon spectrum. The sources were initially used for early-stage breast cancer treatment using a balloon applicator. More recently, Xoft Inc. has developed vaginal and surface applicators. The miniature x-ray sources have been characterized using a modification of the American Association of Physicists in Medicine Task Group No. 43 formalism normally used for radioactive brachytherapy sources. Primary measurements of air kerma were performed using free-air ionization chambers at the University of Wisconsin (UW) and the National Institute of Standards and Technology (NIST). The measurements at UW were used to calibrate a well-type ionization chamber for clinical verification of source strength. Accurate knowledge of the emitted photon spectrum was necessary to calculate the corrections required to determine air-kerma strength, defined in vacuo. Theoretical predictions of the photon spectrum were calculated using three separate Monte Carlo codes: MCNP5, EGSnrc, and PENELOPE. Each code used different implementations of the underlying radiological physics. Benchmark studies were performed to investigate these differences in detail. The most important variation among the codes was found to be the calculation of fluorescence photon production following electron-induced vacancies in the L shell of tungsten atoms. The low-energy tungsten L-shell fluorescence photons have little clinical significance at the treatment distance, but could have a large impact on air-kerma measurements. 
Calculated photon spectra were compared to spectra measured with high-purity germanium spectroscopy systems at both UW and NIST. The effects of escaped germanium fluorescence photons and Compton-scattered photons were taken into account for the UW measurements. The photon spectrum calculated using the PENELOPE Monte Carlo code had the best agreement with the spectrum measured at NIST. Corrections were applied to the free-air chamber measurements to arrive at an air-kerma strength determination for the miniature x-ray sources.

  3. National Photonics Skills Standard for Technicians.

    ERIC Educational Resources Information Center

    Center for Occupational Research and Development, Inc., Waco, TX.

    This document defines "photonics" as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's…

  4. Neutron spectrometry in a mixed field of neutrons and protons with a phoswich neutron detector Part I: response functions for photons and neutrons of the phoswich neutron detector

    NASA Astrophysics Data System (ADS)

    Takada, M.; Taniguchi, S.; Nakamura, T.; Nakao, N.; Uwamino, Y.; Shibata, T.; Fujitaka, K.

    2001-06-01

    We have developed a phoswich neutron detector consisting of an NE213 liquid scintillator surrounded by an NE115 plastic scintillator to distinguish photon and neutron events in a charged-particle mixed field. To obtain the energy spectra by unfolding, the response functions to neutrons and photons were obtained by experiment and calculation. The response functions to photons were measured with radionuclide sources, and were calculated with the EGS4-PRESTA code. The response functions to neutrons were measured with a white neutron source produced by the bombardment of 135 MeV protons onto a Be+C target using a TOF method, and were calculated with the SCINFUL code, which we revised in order to calculate neutron response functions up to 135 MeV. Based on these experimental and calculated results, response matrices for photons up to 20 MeV and neutrons up to 132 MeV could finally be obtained.

  5. Optical control of spin-dependent thermal transport in a quantum ring

    NASA Astrophysics Data System (ADS)

    Abdullah, Nzar Rauf

    2018-05-01

    We report on calculations of spin-dependent thermal transport through a quantum ring with Rashba spin-orbit interaction. The quantum ring is connected to two electron reservoirs at different temperatures. By tuning the Rashba coupling constant, degenerate energy states are formed, leading to a suppression of the heat and thermoelectric currents. In addition, the quantum ring is coupled to a photon cavity with a single photon mode and a linearly polarized photon field. In the resonance regime, when the photon energy is approximately equal to the energy spacing between the two lowest degenerate states of the ring, the polarized photon field can significantly control the heat and thermoelectric currents in the system. The roles of the number of photons initially in the cavity and of the electron-photon coupling strength in spin-dependent heat and thermoelectric currents are presented.

  6. Advanced Numerical and Theoretical Methods for Photonic Crystals and Metamaterials

    NASA Astrophysics Data System (ADS)

    Felbacq, Didier

    2016-11-01

    This book provides a set of theoretical and numerical tools useful for the study of wave propagation in metamaterials and photonic crystals. While concentrating on electromagnetic waves, most of the material can be used for acoustic (or quantum) waves. For each presented numerical method, numerical code written in MATLAB® is presented. The codes are limited to 2D problems and can be easily translated in Python or Scilab, and used directly with Octave as well.

  7. External dose-rate conversion factors of radionuclides for air submersion, ground surface contamination and water immersion based on the new ICRP dosimetric setting.

    PubMed

    Yoo, Song Jae; Jang, Han-Ki; Lee, Jai-Ki; Noh, Siwan; Cho, Gyuseong

    2013-01-01

    For the assessment of external doses due to a contaminated environment, the dose-rate conversion factors (DCFs) prescribed in Federal Guidance Report 12 (FGR 12) and FGR 13 have been widely used. Recently, there have been significant changes in dosimetric models and parameters, which include the use of the Reference Male and Female Phantoms and the revised tissue weighting factors, as well as the updated decay data of radionuclides. In this study, the DCFs for effective and equivalent doses were calculated for three exposure settings: skyshine, groundshine and water immersion. Doses to the Reference Phantoms were calculated by Monte Carlo simulations with the MCNPX 2.7.0 radiation transport code for 26 mono-energy photons between 0.01 and 10 MeV. The transport calculations were performed for the source volume within the cut-off distances practically contributing to the dose rates, which were determined by a simplified calculation model. For small tissues for which the reduction of variance is difficult, the equivalent dose ratios to a larger tissue (with lower statistical errors) nearby were employed to make the calculation efficient. Empirical response functions relating photon energies and the organ equivalent doses or the effective doses were then derived by the use of cubic-spline fitting of the resulting doses for 26 energy points. The DCFs for all radionuclides considered important were evaluated by combining the photon emission data of the radionuclide and the empirical response functions. Finally, contributions of accompanying beta particles to the skin equivalent doses and the effective doses were calculated separately and added to the DCFs. For radionuclides considered in this study, the new DCFs for the three exposure settings were within ±10 % when compared with DCFs in FGR 13.
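The final step described above (combining a nuclide's photon emission data with the fitted energy response) is a weighted sum over the emission lines. The sketch below uses piecewise-linear interpolation as a stand-in for the cubic-spline fit; the response grid and emission data are hypothetical, not the FGR/ICRP values.

```python
from bisect import bisect_left

def interp(x, xs, ys):
    """Piecewise-linear stand-in for the cubic-spline fit of the
    empirical response function (xs must be ascending)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def dose_conversion_factor(emissions, energies, responses):
    """DCF = sum over photon lines of yield * response(E), mirroring
    'combining the photon emission data of the radionuclide and the
    empirical response functions'."""
    return sum(y * interp(e, energies, responses) for e, y in emissions)

# Illustrative (hypothetical) response grid and a two-line nuclide,
# loosely Cs-137-like: a 0.662 MeV line (yield 0.851) plus a minor line.
energies = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]       # MeV
responses = [0.2, 1.5, 3.0, 4.5, 12.0, 18.0]      # arbitrary units
emissions = [(0.662, 0.851), (0.032, 0.058)]      # (energy MeV, yield)
print(dose_conversion_factor(emissions, energies, responses))
```

In the study itself the 26-point response would be spline-fitted rather than linearly interpolated, and beta-particle contributions would be added separately, as the abstract notes.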

  8. Analyzing non-LTE Kr plasmas produced in high energy density experiments: from the Z machine to the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Dasgupta, Arati

    2015-11-01

    Designing high-fluence photon sources above 10 keV is a challenge for high-energy-density plasmas. This has motivated radiation-source development investigations of Kr, whose K-shell energies lie around 13 keV. Recent pulsed-power-driven gas-puff experiments on the refurbished Z machine at Sandia have produced intense X-rays in the multi-keV photon energy range. K-shell radiative yields and efficiencies are very high for Ar, but decrease rapidly for higher atomic number (ZA) elements such as Kr. It has been suggested that an optimum exists corresponding to a trade-off between the increase of photon energy for higher-ZA elements and the corresponding fall-off in radiative power. However, the conversion efficiency on NIF, where the drive, energy deposition process and target dynamics are different, does not fall off with higher ZA as rapidly as on Z. We have developed detailed atomic structure and collisional data for the full K- and L-shells and partial M-shell of Kr using the Flexible Atomic Code (FAC). Our non-LTE atomic model includes all collisional and recombination processes, including state-specific dielectronic recombination (DR), that significantly affect the ionization balance and spectra of Kr plasmas at the temperatures and densities of concern. The model couples ionization physics, radiation production and transport, and magnetohydrodynamics. In this talk, I will give a detailed description of the model and discuss 1D Kr simulations employing a multifrequency radiation transport scheme. Synthetic K- and L-shell spectra will be compared with available experimental data. This talk will analyze experimental data indicative of the differences between the Z and NIF experiments and discuss how they affect the K-shell radiative output of Kr plasma. Work supported by DOE/NNSA.

  9. External dose-rate conversion factors of radionuclides for air submersion, ground surface contamination and water immersion based on the new ICRP dosimetric setting

    PubMed Central

    Yoo, Song Jae; Jang, Han-Ki; Lee, Jai-Ki; Noh, Siwan; Cho, Gyuseong

    2013-01-01

    For the assessment of external doses due to a contaminated environment, the dose-rate conversion factors (DCFs) prescribed in Federal Guidance Report 12 (FGR 12) and FGR 13 have been widely used. Recently, there have been significant changes in dosimetric models and parameters, which include the use of the Reference Male and Female Phantoms and the revised tissue weighting factors, as well as the updated decay data of radionuclides. In this study, the DCFs for effective and equivalent doses were calculated for three exposure settings: skyshine, groundshine and water immersion. Doses to the Reference Phantoms were calculated by Monte Carlo simulations with the MCNPX 2.7.0 radiation transport code for 26 mono-energy photons between 0.01 and 10 MeV. The transport calculations were performed for the source volume within the cut-off distances practically contributing to the dose rates, which were determined by a simplified calculation model. For small tissues for which the reduction of variance is difficult, the equivalent dose ratios to a larger tissue (with lower statistical errors) nearby were employed to make the calculation efficient. Empirical response functions relating photon energies and the organ equivalent doses or the effective doses were then derived by the use of cubic-spline fitting of the resulting doses for 26 energy points. The DCFs for all radionuclides considered important were evaluated by combining the photon emission data of the radionuclide and the empirical response functions. Finally, contributions of accompanying beta particles to the skin equivalent doses and the effective doses were calculated separately and added to the DCFs. For radionuclides considered in this study, the new DCFs for the three exposure settings were within ±10 % when compared with DCFs in FGR 13. PMID:23542764

  10. Valley photonic crystals for control of spin and topology

    NASA Astrophysics Data System (ADS)

    Dong, Jian-Wen; Chen, Xiao-Dong; Zhu, Hanyu; Wang, Yuan; Zhang, Xiang

    2017-03-01

    Photonic crystals offer unprecedented opportunity for light manipulation and applications in optical communication and sensing. Exploration of topology in photonic crystals and metamaterials with non-zero gauge field has inspired a number of intriguing optical phenomena such as one-way transport and Weyl points. Recently, a new degree of freedom, valley, has been demonstrated in two-dimensional materials. Here, we propose a concept of valley photonic crystals with electromagnetic duality symmetry but broken inversion symmetry. We observe photonic valley Hall effect originating from valley-dependent spin-split bulk bands, even in topologically trivial photonic crystals. Valley-spin locking behaviour results in selective net spin flow inside bulk valley photonic crystals. We also show the independent control of valley and topology in a single system that has been long pursued in electronic systems, resulting in topologically-protected flat edge states. Valley photonic crystals not only offer a route towards the observation of non-trivial states, but also open the way for device applications in integrated photonics and information processing using spin-dependent transportation.

  11. Valley photonic crystals for control of spin and topology.

    PubMed

    Dong, Jian-Wen; Chen, Xiao-Dong; Zhu, Hanyu; Wang, Yuan; Zhang, Xiang

    2017-03-01

    Photonic crystals offer unprecedented opportunity for light manipulation and applications in optical communication and sensing. Exploration of topology in photonic crystals and metamaterials with non-zero gauge field has inspired a number of intriguing optical phenomena such as one-way transport and Weyl points. Recently, a new degree of freedom, valley, has been demonstrated in two-dimensional materials. Here, we propose a concept of valley photonic crystals with electromagnetic duality symmetry but broken inversion symmetry. We observe photonic valley Hall effect originating from valley-dependent spin-split bulk bands, even in topologically trivial photonic crystals. Valley-spin locking behaviour results in selective net spin flow inside bulk valley photonic crystals. We also show the independent control of valley and topology in a single system that has been long pursued in electronic systems, resulting in topologically-protected flat edge states. Valley photonic crystals not only offer a route towards the observation of non-trivial states, but also open the way for device applications in integrated photonics and information processing using spin-dependent transportation.

  12. Optimizing the use of a sensor resource for opponent polarization coding

    PubMed Central

    Heras, Francisco J.H.

    2017-01-01

Flies use specialized photoreceptors R7 and R8 in the dorsal rim area (DRA) to detect skylight polarization. R7 and R8 form a tiered waveguide (central rhabdomere pair, CRP) with R7 on top, filtering light delivered to R8. We examine how the division of a given resource, CRP length, between R7 and R8 affects their ability to code polarization angle. We model optical absorption to show how the length fractions allotted to R7 and R8 determine the rates at which they transduce photons, and correct these rates for transduction unit saturation. The rates give the polarization signal and photon noise in R7 and in R8. Their signals are combined in an opponent unit, intrinsic noise is added, and the unit’s output is analysed to extract two measures of coding ability: the number of discriminable polarization angles and the mutual information. A very long R7 maximizes opponent signal amplitude, but codes inefficiently due to photon noise in the very short R8. Discriminability and mutual information are optimized by maximizing the signal-to-noise ratio, SNR. At lower light levels approximately equal lengths of R7 and R8 are optimal because photon noise dominates. At higher light levels intrinsic noise comes to dominate and a shorter R8 is optimal; the optimum R8 length fraction falls to one third. This intensity-dependent range of optimal length fractions corresponds to the range observed in different fly species and is not affected by transduction unit saturation. We conclude that a limited resource, rhabdom length, can be divided between two polarization sensors, R7 and R8, to optimize opponent coding. We also find that coding ability increases sub-linearly with total rhabdom length, according to the law of diminishing returns. Consequently, the specialized shorter central rhabdom in the DRA codes polarization twice as efficiently per unit rhabdom length as the longer rhabdom used in the rest of the eye. PMID:28316880
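A toy version of this length-fraction trade-off can be scanned numerically. The absorption coefficient, contrast model and noise levels below are invented for illustration, not the paper's fitted optics; the only structure taken from the abstract is that R7 (length fraction f) filters the light reaching R8, and that the opponent output is limited by photon plus intrinsic noise:

```python
import math

def snr(f, N=1000.0, k=3.0, contrast=0.1, intrinsic=2.0):
    """Opponent signal-to-noise for R7 length fraction f of a unit rhabdom.
    All parameter values are hypothetical."""
    n7 = N * (1.0 - math.exp(-k * f))                      # photons caught by R7
    n8 = N * math.exp(-k * f) * (1.0 - math.exp(-k * (1.0 - f)))  # filtered R8 catch
    c8 = contrast * (1.0 + f)         # toy: R7's screening boosts R8's contrast
    signal = contrast * n7 + c8 * n8  # anti-phase modulations add in the opponent unit
    noise = math.sqrt(n7 + n8 + 2.0 * intrinsic ** 2)      # photon + intrinsic noise
    return signal / noise

# Scan the length fraction to locate the toy optimum.
best_snr, best_f = max((snr(f / 100.0), f / 100.0) for f in range(1, 100))
print(best_f, best_snr)
```

Even this crude model reproduces the qualitative point of the abstract: an interior optimum, with a very long R7 (f near 1) penalized by the photon-starved R8.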

  13. Numerical Radiative Transfer and the Hydrogen Reionization of the Universe

    NASA Astrophysics Data System (ADS)

    Petkova, M.

    2011-03-01

One of the most interesting questions in cosmology is to understand how the Universe evolved from its nearly uniform and simple state briefly after the Big Bang to the complex state we see around us today. In particular, we would like to explain how galaxies have formed, and why they have the properties that we observe in the local Universe. Computer simulations play a highly important role in studying these questions, because they allow one to follow the dynamical equations of gravity and hydrodynamics well into the non-linear regime of the growth of cosmic structures. The current generation of simulation codes for cosmological structure formation calculates the self-gravity of dark matter and cosmic gas, and the fluid dynamics of the cosmic gas, but radiation processes are typically not taken into account, or only at the level of a spatially uniform, externally imposed background field. However, we know that the radiation field has been highly inhomogeneous during certain phases of the growth of structure, and may have in fact provided important feedback effects for galaxy formation. In particular, it is well established that the diffuse gas in the universe was nearly fully neutral after recombination at very high redshift, but today this gas is highly ionized. Sometime during the evolution, a transition to the ionized state must have occurred, a process we refer to as reionization. The UV radiation responsible for this reionization is now permeating the universe and may in part explain why small dwarf galaxies have such low luminosities. It is therefore clear that accurate and self-consistent studies of galaxy formation and of the dynamics of the reionization process should ideally be done with simulation codes that directly include a treatment of radiative transfer, and that account for all relevant source and sink terms of the radiation. 
We present a novel numerical implementation of radiative transfer in the cosmological smoothed particle hydrodynamics (SPH) simulation code GADGET. It is based on a fast, robust and photon-conserving integration scheme where the radiation transport problem is approximated in terms of moments of the transfer equation and by using a variable Eddington tensor as a closure relation, following the "OTVET"-suggestion of Gnedin & Abel. We derive a suitable anisotropic diffusion operator for use in the SPH discretization of the local photon transport, and we combine this with an implicit solver that guarantees robustness and photon conservation. This entails a matrix inversion problem of a huge, sparsely populated matrix that is distributed in memory in our parallel code. We solve this task iteratively with a conjugate gradient scheme. Finally, to model photon sink processes we consider ionization and recombination processes of hydrogen, which is represented with a chemical network that is evolved with an implicit time integration scheme. We present several tests of our implementation, including single and multiple sources in static uniform density fields with and without temperature evolution, shadowing by a dense clump, and multiple sources in a static cosmological density field. All tests agree quite well with analytical computations or with predictions from other radiative transfer codes, except for shadowing. However, unlike most other radiative transfer codes presently in use for studying reionization, our new method can be used on-the-fly during dynamical cosmological simulations, allowing simultaneous treatments of galaxy formation and the reionization process of the Universe. We carry out hydrodynamical simulations of galaxy formation that simultaneously follow radiative transfer of hydrogen-ionizing photons, based on the optically-thin variable Eddington tensor approximation as implemented in the GADGET code. 
We consider only star-forming galaxies as sources and examine to what extent they can yield a reasonable reionization history and thermal state of the intergalactic medium at redshifts around z~3. This serves as an important benchmark for our self-consistent methodology to simulate galaxy formation and reionization, and for future improvements through accounting of other sources and other wavelength ranges. We find that star formation alone is sufficient for reionizing the Universe by redshift z~6. For a suitable choice of the escape fraction and the heating efficiency, our models are approximately able to account at the same time for the one-point function and the power spectrum of the Lyman-α forest. The radiation field has an important impact on the star formation rate density in our simulations and significantly lowers the gaseous and stellar fractions in low-mass dark matter halos. Our results thus directly demonstrate the importance of radiative feedback for galaxy formation. In search of even better and more accurate methods we introduce a numerical implementation of radiative transfer based on an explicitly photon-conserving advection scheme, where radiative fluxes over the cell interfaces of a structured or unstructured mesh are calculated with a second-order reconstruction of the intensity field. The approach employs a direct discretization of the radiative transfer equation in Boltzmann form with adjustable angular resolution that in principle works equally well in the optically thin and optically thick regimes. In our most general formulation of the scheme, the local radiation field is decomposed into a linear sum of directional bins of equal solid angle, tessellating the unit sphere. Each of these "cone-fields" is transported independently, with constant intensity as a function of direction within the cone. 
Photons propagate at the speed of light (or optionally using a reduced speed of light approximation to allow larger timesteps), yielding a fully time-dependent solution of the radiative transfer equation that can naturally cope with an arbitrary number of sources, as well as with scattering. The method casts sharp shadows, subject to the limitations induced by the adopted angular resolution. If the number of point sources is small and scattering is unimportant, our implementation can alternatively treat each source exactly in angular space, producing shadows whose sharpness is only limited by the grid resolution. A third hybrid alternative is to treat only a small number of the locally most luminous point sources explicitly, with the rest of the radiation intensity followed in a radiative diffusion approximation. We have implemented the method in the moving-mesh code AREPO, where it is coupled to the hydrodynamics in an operator splitting approach that subcycles the radiative transfer alternatingly with the hydrodynamical evolution steps. We also discuss our treatment of basic photon sink processes relevant for cosmological reionization, with a chemical network that can accurately deal with non-equilibrium effects. We discuss several tests of the new method, including shadowing configurations in two and three dimensions, ionized sphere expansion in static and dynamic density field and the ionization of a cosmological density field. The tests agree favorably with analytic expectations and results based on other numerical radiative transfer approximations. We compare how our schemes perform in a simulation of hydrogen reionization, excluding stellar winds due to development issues. The underlying cosmological simulation codes produce different star formation rate histories, which results in a different total photon budget. As a consequence reionization in GADGET happens at a higher redshift, i.e. sooner, than in AREPO. 
The lower number of ionizing photons in the latter code results in a higher volume-averaged neutral fraction at redshift z = 3 and a different temperature state of the baryonic gas. We find that in both reionization scenarios the baryon fraction of low-mass dark matter halos is reduced due to photoheating processes, and observe that the change is bigger in the GADGET simulation than in the AREPO one, which is due to the higher ionized fractions we find in the former. Both simulations compare only marginally well with the Lyman-α forest observations at redshift z = 3, but the results are not expected to be in very good agreement due to the lack of the essential feedback from stellar winds in the simulations. Finally, we can conclude that despite the differences between the two realizations, both codes perform well at the given problem and are suitable for studying the process of reionization because they produce sensible results within the limits of the observations. We emphasize that the reionization history depends strongly on the star formation rate density in the simulations, which should therefore be accurately reproduced.
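The implicit, photon-conserving step described for GADGET reduces to solving one large, sparse, symmetric positive-definite linear system per timestep with conjugate gradients. A minimal serial sketch of that idea, using a 1-D diffusion step rather than the production SPH anisotropic-diffusion operator, looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Standard CG iteration for an SPD system A x = b (the production codes
    run this distributed-memory over a huge sparse matrix)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Implicit photon-diffusion step (I - dt*D*Lap) J_new = J_old on n cells.
n, dt, D = 50, 0.1, 1.0
lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D Laplacian stencil
A = np.eye(n) - dt * D * lap                              # SPD for this sign convention
b = np.zeros(n)
b[n // 2] = 1.0                                           # photon pulse in the middle cell
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```

The implicit form is what buys robustness: the step size is not limited by the diffusion stability criterion, at the cost of the linear solve.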

  14. Dissociative recombination and electron-impact de-excitation in CH photon emission under ITER divertor-relevant plasma conditions

    NASA Astrophysics Data System (ADS)

    van Swaaij, G. A.; Bystrov, K.; Borodin, D.; Kirschner, A.; van der Vegt, L. B.; van Rooij, G. J.; De Temmerman, G.; Goedheer, W. J.

    2012-09-01

    For understanding carbon erosion and redeposition in nuclear fusion devices, it is important to understand the transport and chemical break-up of hydrocarbon molecules in edge plasmas, often diagnosed by emission of the CH A 2Δ-X 2Π Gerö band around 430 nm. The CH A-level can be excited either by electron-impact (EI) or by dissociative recombination (DR) of hydrocarbon ions. These processes were included in the 3D Monte Carlo impurity transport code ERO. A series of methane injection experiments was performed in the high-density, low-temperature linear plasma generator Pilot-PSI, and simulated emission intensity profiles were benchmarked against these experiments. It was confirmed that excitation by DR dominates at Te < 1.5 eV. The results indicate that the fraction of DR events that lead to a CH radical in the A-level and consequent photon emission is at least 10%. Additionally, quenching of the excited CH radicals by EI de-excitation was included in the modeling. This quenching is shown to be significant: depending on the electron density, it reduces the effective CH emission by a factor of 1.4 at ne = 1.3 × 1020 m-3, to 2.8 at ne = 9.3 × 1020 m-3. Its inclusion significantly improved agreement between experiment and modeling.
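The quenching effect reported above can be sketched with a simple two-rate balance: if the CH A-level decays radiatively at rate A_rad and is de-excited by electron impact at rate n_e·k_q, the fraction of excited radicals that actually emit is A_rad/(A_rad + n_e·k_q). The radiative rate below is only order-of-magnitude and the de-excitation coefficient is a hypothetical placeholder, so the printed factors are illustrative, not the paper's fitted values:

```python
A_RAD = 1.8e6      # s^-1, CH A-X radiative decay rate (order of magnitude)
K_QUENCH = 5e-15   # m^3 s^-1, hypothetical e-impact de-excitation coefficient

def emission_reduction(n_e):
    """Factor by which collisional quenching reduces the effective CH
    emission at electron density n_e (m^-3)."""
    return 1.0 + n_e * K_QUENCH / A_RAD

# Evaluate at the two densities quoted in the abstract.
for n_e in (1.3e20, 9.3e20):
    print(f"n_e = {n_e:.1e} m^-3: reduction factor {emission_reduction(n_e):.2f}")
```

The key qualitative point survives the crude constants: the reduction grows linearly with electron density, which is why it matters most at the high densities of a divertor-relevant plasma.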

  15. SPRAI: coupling of radiative feedback and primordial chemistry in moving mesh hydrodynamics

    NASA Astrophysics Data System (ADS)

    Jaura, O.; Glover, S. C. O.; Klessen, R. S.; Paardekooper, J.-P.

    2018-04-01

In this paper, we introduce a new radiative transfer code SPRAI (Simplex Photon Radiation in the Arepo Implementation) based on the SimpleX radiation transfer method. This method, originally used only for post-processing, is now directly integrated into the AREPO code and takes advantage of its adaptive unstructured mesh. Photons emitted by the sources are transferred through a series of Voronoi gas cells within a specific solid angle. From the photon attenuation, we derive the corresponding photon fluxes and ionization rates and feed them to a primordial chemistry module. This gives us a self-consistent method for studying dynamical and chemical processes caused by ionizing sources in primordial gas. Since the computational cost of the SimpleX method does not scale directly with the number of sources, it is convenient for studying systems such as primordial star-forming haloes that may form multiple ionizing sources.
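The attenuation step at the heart of such a cell-by-cell scheme can be sketched generically: a photon packet crosses a chain of gas cells, each cell absorbs a fraction 1 − exp(−τ) of the incoming photons, and the absorbed count sets that cell's photoionization rate. The cell densities and path lengths below are illustrative, not values from the paper; only the cross-section is the standard HI threshold value:

```python
import math

SIGMA = 6.3e-22   # m^2, HI photoionization cross-section at 13.6 eV

def transport(n_photons, cells):
    """cells: list of (n_HI [m^-3], path length ds [m]).
    Returns (photons escaping the chain, photons absorbed per cell)."""
    absorbed = []
    for n_hi, ds in cells:
        tau = n_hi * SIGMA * ds                 # optical depth of this cell
        dn = n_photons * (1.0 - math.exp(-tau)) # photons absorbed here
        absorbed.append(dn)
        n_photons -= dn                         # explicit photon conservation
    return n_photons, absorbed

# Three illustrative cells of increasing neutral density, ~kpc path each.
cells = [(50.0, 3e19), (100.0, 3e19), (200.0, 3e19)]
escaped, absorbed = transport(1e50, cells)
print(escaped, absorbed)
```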

  16. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark-gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles, including neutrinos, is built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). 
The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package that allows a very efficient and highly-accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  17. Diffusive, Supersonic X-ray Transport in Foam Cylinders

    NASA Astrophysics Data System (ADS)

    Back, Christina A.

    1999-11-01

Diffusive supersonic radiation transport, where the ratio of the diffusive radiation front velocity to the material sound speed exceeds 2, has been studied in a series of laboratory experiments on low-density foams. This work is of interest for radiation transport in basic science and astrophysics. The Marshak radiation wave transport is studied for both low- and high-Z foam materials and for different length foams in a novel hohlraum geometry that allows direct comparisons with 2-dimensional analytic models and code simulations. The radiation wave is created by a ~ 80 eV near-blackbody 12-ns long drive or a ~ 200 eV 1.2-2.4 ns long drive generated by laser-heated Au hohlraums. The targets are SiO2 and Ta2O5 aerogel foams of varying lengths which span 10 to 50 mg/cc densities. Clean signatures of radiation breakout were observed by radially resolved face-on transmission measurements of the radiation flux at a photon energy of 250 eV or 550 eV. The high-quality data provide new detailed information on the importance of both the fill and wall material opacities and heat capacities in determining the radiation front speed and curvature.
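The supersonic criterion quoted above can be illustrated with the textbook diffusive-front scaling x_F ~ sqrt(2Dt), which gives a front velocity v_F = D/x_F that decays as 1/sqrt(t). The diffusivity and sound speed below are placeholders, not values from the experiment:

```python
import math

def front_mach(D, c_s, t):
    """Ratio of diffusive front velocity to sound speed at time t.
    D and c_s are in consistent units (e.g. cm^2/ns and cm/ns)."""
    x_f = math.sqrt(2.0 * D * t)  # diffusive front position
    v_f = D / x_f                 # front velocity, d(x_f)/dt
    return v_f / c_s

# Illustrative numbers only: the wave starts strongly supersonic and
# slows as it propagates, eventually dropping out of the diffusive,
# supersonic regime the experiment is designed to probe.
for t in (0.5, 1.0, 2.0):
    print(t, front_mach(2e-3, 3e-3, t))
```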

  18. Reanalysis of tritium production in a sphere of ⁶LiD irradiated by 14-MeV neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawcett, L.R. Jr.

    1985-08-01

Tritium production and activation of radiochemical detector foils in a sphere of ⁶LiD irradiated by a central source of 14-MeV neutrons has been reanalyzed. The ⁶LiD sphere consisted of 10 solid hemispherical nested shells with ampules of ⁶LiH, ⁷LiH, and activation foils located 2.2, 5, 7.7, 12.6, 20, and 30 cm from the center. The Los Alamos Monte Carlo Neutron Photon Transport Code (MCNP) was used to calculate neutron transport through the ⁶LiD, tritium production in the ampules, and foil activation. The MCNP input model was three-dimensional and employed ENDF/B-V cross sections for transport, tritium production, and (where available) foil activation. The reanalyzed experimentally observed-to-calculated values of tritium production were 1.053 ± 2.1% in ⁶LiH and 0.999 ± 2.1% in ⁷LiH. The recalculated foil activation observed-to-calculated ratios were not generally improved over those reported in the original analysis.

  19. The limited role of recombination energy in common envelope removal

    NASA Astrophysics Data System (ADS)

    Grichener, Aldana; Sabach, Efrat; Soker, Noam

    2018-05-01

    We calculate the outward energy transport time by convection and photon diffusion in an inflated common envelope and find this time to be shorter than the envelope expansion time. We conclude therefore that most of the hydrogen recombination energy ends in radiation rather than in kinetic energy of the outflowing envelope. We use the stellar evolution code MESA and inject energy inside the envelope of an asymptotic giant branch star to mimic energy deposition by a spiraling-in stellar companion. During 1.7 years the envelope expands by a factor of more than 2. Along the entire evolution the convection can carry the energy very efficiently outwards, to the radius where radiative transfer becomes more efficient. The total energy transport time stays within several months, shorter than the dynamical time of the envelope. Had we included rapid mass loss, as is expected in the common envelope evolution, the energy transport time would have been even shorter. It seems that calculations that assume that most of the recombination energy ends in the outflowing gas might be inaccurate.
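The timescale comparison driving this argument can be sketched in a few lines: sum the convective crossing time over envelope shells and set it against the relevant evolution time. The shell structure and convective velocity below are invented stand-ins, not MESA output; the expansion time of 1.7 yr is taken from the abstract:

```python
import math

G = 6.674e-8                       # cgs gravitational constant
R_SUN, M_SUN = 6.96e10, 1.989e33   # cgs

def transport_time(shells):
    """Convective energy transport time: sum of dr / v_conv over shells.
    Radiative-diffusion zones would contribute dr^2 / D_rad terms instead."""
    return sum(dr / v for dr, v in shells)

def dynamical_time(R, M):
    """Free-fall-like dynamical time sqrt(R^3 / (G M))."""
    return math.sqrt(R ** 3 / (G * M))

# Toy inflated AGB envelope: ~300 R_sun in three shells, ~10 km/s convection.
shells = [(100 * R_SUN, 1e6)] * 3
expansion_time = 1.7 * 3.156e7     # 1.7 yr, from the abstract
print(transport_time(shells) / 86400.0, "days to transport")
print(dynamical_time(300 * R_SUN, M_SUN) / 86400.0, "days dynamical")
```

With these toy numbers the transport time comes out at a few hundred days, i.e. shorter than the 1.7 yr expansion timescale, which is the sense of the paper's conclusion that recombination energy is radiated away rather than driving the outflow.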

  20. Transport of energy by ultraintense laser-generated electrons in nail-wire targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T.; Lawrence Livermore National Laboratory, Livermore, California 94550; Key, M. H.

    2009-11-15

Nail-wire targets (20 μm diameter copper wires with 80 μm hemispherical head) were used to investigate energy transport by relativistic fast electrons generated in intense laser-plasma interactions. The targets were irradiated using the 300 J, 1 ps, and 2×10²⁰ W·cm⁻² Vulcan laser at the Rutherford Appleton Laboratory. A spherically bent crystal imager, a highly ordered pyrolytic graphite spectrometer, and a single photon counting charge-coupled device gave absolute Cu Kα measurements. Results show a concentration of energy deposition in the head and an approximately exponential fall-off along the wire with about 60 μm 1/e decay length due to resistive inhibition. The coupling efficiency to the wire was 3.3±1.7% with an average hot electron temperature of 620±125 keV. Extreme ultraviolet images (68 and 256 eV) indicate additional heating of a thin surface layer of the wire. Modeling using the hybrid E-PLAS code has been compared with the experimental data, showing evidence of resistive heating, magnetic trapping, and surface transport.

  1. A new line-of-sight approach to the non-linear Cosmic Microwave Background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Koyama, Kazuya; Pettinari, Guido W., E-mail: christian.fidler@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: guido.pettinari@gmail.com

    2015-04-01

We develop the transport operator formalism, a new line-of-sight integration framework to calculate the anisotropies of the Cosmic Microwave Background (CMB) at the linear and non-linear level. This formalism utilises a transformation operator that removes all inhomogeneous propagation effects acting on the photon distribution function, thus achieving a split between perturbative collisional effects at recombination and non-perturbative line-of-sight effects at later times. The former can be computed in the framework of standard cosmological perturbation theory with a second-order Boltzmann code such as SONG, while the latter can be treated within a separate perturbative scheme allowing the use of non-linear Newtonian potentials. We thus provide a consistent framework to compute all physical effects contained in the Boltzmann equation and to combine the standard remapping approach with Boltzmann codes at any order in perturbation theory, without assuming that all sources are localised at recombination.

  2. Study of solid-conversion gaseous detector based on GEM for high energy X-ray industrial CT.

    PubMed

    Zhou, Rifeng; Zhou, Yaling

    2014-01-01

The general gaseous ionization detectors are not suitable for high energy X-ray industrial computed tomography (HEICT) because of their inherent limitations, especially low detection efficiency and large volume. The goal of this study was to investigate a new type of gaseous detector to solve these problems. The novel detector uses a metal foil as an X-ray converter to improve the conversion efficiency, and a Gas Electron Multiplier (hereinafter "GEM") as an electron amplifier to reduce its volume. The detection mechanism and signal formation of the detector are discussed in detail. The conversion efficiency was calculated using the EGSnrc Monte Carlo code, and the transport of photons and the secondary-electron avalanche in the detector were simulated with the Maxwell and Garfield codes. The results indicate that this detector has a higher conversion efficiency as well as a smaller volume. Theoretically, this kind of detector could be a perfect candidate for replacing conventional detectors in HEICT.

  3. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    PubMed

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculations, with the MCNP-4C code, of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small-angle incidences. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is due to secondary electrons. PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of H(p)(3) and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.

  4. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method in both accuracy and efficiency.
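The tissue-side Monte Carlo loop such a platform implements can be sketched generically. This is a standard free-path sampling and albedo-weighting scheme, not MOSE's actual source; the optical coefficients are illustrative:

```python
import math
import random

def simulate_packet(mu_a, mu_s, rng, max_steps=10_000):
    """Track one photon packet through a homogeneous medium.
    mu_a, mu_s: absorption and scattering coefficients (1/cm).
    Returns the total path length travelled before the packet fades out."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    weight, path = 1.0, 0.0
    for _ in range(max_steps):
        path += -math.log(rng.random()) / mu_t  # sampled free path to next collision
        weight *= albedo                        # deposit (1-albedo) of the weight
        if weight < 1e-4:                       # terminate faint packets
            break
    return path

# Tissue-like coefficients: weak absorption, strong scattering.
rng = random.Random(1)
paths = [simulate_packet(mu_a=0.1, mu_s=10.0, rng=rng) for _ in range(2000)]
print(sum(paths) / len(paths))
```

A full implementation would add a scattering phase function (e.g. Henyey-Greenstein), boundary handling, and Russian roulette instead of hard termination; the parallelization the paper mentions is natural here because packets are independent.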

  5. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method in both accuracy and efficiency. PMID:20445737

  6. High-speed, Low Voltage, Miniature Electro-optic Modulators Based on Hybrid Photonic-Crystal/Polymer/Sol-Gel Technology

    DTIC Science & Technology

    2012-02-01

    Final report, covering 15/11/2008 - 15/11/2011, dated 01/02/2012. Program Manager: Dr. Charles Y-C Lee. Subject terms: electro-optic modulator, silicon photonics, integrated optics, electro-optic polymer, avionics, optical communications, sol-gel, nanotechnology.

  7. Design of laboratory experiments to study radiation-driven implosions

    DOE PAGES

    Keiter, P. A.; Trantham, M.; Malamud, G.; ...

    2017-02-03

    The interstellar medium is heterogeneous, with dense clouds amid an ambient medium. Radiation from young OB stars asymmetrically irradiates the dense clouds. Bertoldi (1989) developed analytic formulae to describe possible outcomes for these clouds when irradiated by hot, young stars. One of the critical parameters that determines the cloud’s fate is the number of photon mean free paths in the cloud. For the extreme cases where the cloud size is either much greater than or much less than one mean free path, the radiation transport should be well understood. However, as one transitions between these limits, the radiation transport is much more complex and is a challenge to solve with many of the current radiation transport models implemented in codes. In this paper, we present the design of laboratory experiments that use a thermal source of x-rays to asymmetrically irradiate a low-density plastic foam sphere. The experiment will vary the density, and hence the number of mean free paths, of the sphere to study the radiation transport in different regimes. Finally, we have developed dimensionless parameters to relate the laboratory experiment to the astrophysical system, and we show that we can perform the experiment in the same transport regime.

  8. PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long

    2018-06-01

    We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high-performance computing (HPC) systems and threads-oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm for the long-range force, the Tree algorithm for the short-range force, and the direct-summation Particle-Particle (PP) algorithm for gravity from very close particles. A self-similar space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is used to flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well, and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also tested the accuracy of the code against the widely used Gadget-2 and found excellent agreement.
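
    The direct-summation PP part of such a hybrid gravity scheme can be illustrated with a minimal O(N²) kernel. The Plummer softening and the unit system below are generic assumptions for illustration, not details taken from the PHoToNs paper:

    ```python
    import math

    def pp_accelerations(positions, masses, softening=0.01, G=1.0):
        """Direct-summation (particle-particle) gravitational accelerations.
        O(N^2); in a hybrid code this kernel is applied only to very close pairs."""
        n = len(positions)
        acc = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            xi, yi, zi = positions[i]
            for j in range(n):
                if i == j:
                    continue
                dx = positions[j][0] - xi
                dy = positions[j][1] - yi
                dz = positions[j][2] - zi
                # Plummer softening avoids the force singularity at r -> 0
                r2 = dx * dx + dy * dy + dz * dz + softening * softening
                f = G * masses[j] / (r2 * math.sqrt(r2))
                acc[i][0] += f * dx
                acc[i][1] += f * dy
                acc[i][2] += f * dz
        return acc
    ```

    For two unit masses at unit separation with zero softening, each particle feels unit acceleration toward the other, which is a convenient sanity check.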

  9. NARMER-1: a photon point-kernel code with build-up factors

    NASA Astrophysics Data System (ADS)

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

    This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current development context of the code, the paper presents the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, source description, etc. Specific and recent features are also detailed: exclusion spheres, tetrahedral meshes, and parallel operation. Some points about verification and validation are then presented. Finally, we present tools that can help the user with operations such as visualization and pre-treatment.
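
    A point-kernel code of this kind evaluates, for each source point, an attenuated inverse-square flux multiplied by a buildup factor B(μr) that accounts for scattered photons. A minimal sketch of that kernel, with a hypothetical linear buildup model standing in for NARMER-1's actual tabulations:

    ```python
    import math

    def point_kernel_flux(S, mu, r, buildup=lambda mu_r: 1.0):
        """Photon flux from an isotropic point source of strength S at distance r:
            phi(r) = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)
        where mu is the linear attenuation coefficient and B the buildup factor
        (default B = 1 gives the uncollided flux only)."""
        mu_r = mu * r
        return S * buildup(mu_r) * math.exp(-mu_r) / (4.0 * math.pi * r * r)

    # Hypothetical linear buildup model, for illustration only:
    linear_buildup = lambda mu_r: 1.0 + mu_r
    ```

    With no attenuation (mu = 0) the result reduces to the geometric 1/(4πr²) falloff, another easy sanity check.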

  10. Time-dependent current into and through multilevel parallel quantum dots in a photon cavity

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Vidar; Abdullah, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei

    2017-05-01

    We analyze theoretically the charging current into, and the transport current through, a nanoscale two-dimensional electron system with two parallel quantum dots embedded in a short wire placed in a photon cavity. A plunger gate is used to place specific many-body states of the interacting system in the bias window defined by the external leads. We show how the transport phenomena active in the many-level complex central system strongly depend on the gate voltage. We identify resonant transport through the central system when the two spin components of the one-electron ground state are in the bias window. This resonant transport through the lowest-energy electron states seems largely independent of the detuned photon field when judged from the transport current. This could be expected in the small-bias regime, but an observation of the occupancy of the states of the system reveals that this picture is not entirely true. The current does not reflect the slower photon-active internal transitions bringing the system into the steady state. The number of initially present photons determines when the system reaches the true steady state. With two-electron states in the bias window, we observe a more complex situation, with intermediate radiative and nonradiative relaxation channels leading to a steady state with a weak nonresonant current caused by inelastic tunneling through the two-electron ground state of the system. The presence of the radiative channels makes this phenomenon dependent on the number of photons initially in the cavity.

  11. Monte Carlo simulation of photon buildup factors for shielding materials in diagnostic x-ray facilities.

    PubMed

    Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim

    2012-10-01

    A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad-beam geometry, for photon energies from 10 keV to 150 keV at 5 keV intervals, is presented. The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. An example illustrating the use of the obtained buildup factor data in computing the broad-beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad-beam transmission for these tube potentials. The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad-beam transmission for barriers in shielding x-ray facilities.
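
    Given a tabulated broad-beam transmission curve like the one described, the half value and tenth value layers can be extracted by log-linear interpolation. A sketch, assuming a monotonically decreasing tabulated curve (the function name and interface are illustrative):

    ```python
    import math

    def value_layer(thicknesses, transmissions, level):
        """Thickness at which a tabulated transmission curve falls to `level`
        (0.5 for the half value layer, 0.1 for the tenth value layer), using
        log-linear interpolation between tabulated points."""
        logs = [math.log(t) for t in transmissions]
        target = math.log(level)
        for i in range(1, len(logs)):
            if logs[i] <= target:
                frac = (target - logs[i - 1]) / (logs[i] - logs[i - 1])
                return thicknesses[i - 1] + frac * (thicknesses[i] - thicknesses[i - 1])
        raise ValueError("transmission curve does not reach the requested level")
    ```

    For a purely exponential curve the interpolation is exact: with an attenuation of ln 2 per millimetre, the half value layer comes out as 1 mm and the tenth value layer as ln 10 / ln 2 ≈ 3.32 mm.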

  12. Monte Carlo simulation of photon buildup factors for shielding materials in diagnostic x-ray facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim

    2012-10-15

    Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad-beam geometry, for photon energies from 10 keV to 150 keV at 5 keV intervals, is presented. Methods: The Monte Carlo N-Particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data in computing the broad-beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad-beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad-beam transmission for barriers in shielding x-ray facilities.

  13. Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions

    NASA Astrophysics Data System (ADS)

    Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.

    2018-05-01

    Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma, based on simulations of photon trajectories in a medium, is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium, and the algorithmic differences compared to photon transport, are discussed.

  14. Experimental validation of a coupled neutron-photon inverse radiation transport solver

    NASA Astrophysics Data System (ADS)

    Mattingly, John; Mitchell, Dean J.; Harding, Lee T.

    2011-10-01

    Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
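
    The inverse-solver idea, fitting a forward transport model to measured signatures by nonlinear regression, can be illustrated with a toy one-parameter fit. The exponential forward model and brute-force grid search below are stand-ins for Sandia's deterministic transport model and regression algorithm, chosen only to show the structure of the inversion:

    ```python
    import math

    def forward_model(thickness, s0=100.0, mu=0.2):
        """Toy forward model (hypothetical): predicted signature for a given
        reflector thickness, here a simple exponential attenuation."""
        return s0 * math.exp(-mu * thickness)

    def fit_thickness(measurements, grid_max=20.0, steps=2000):
        """Least-squares fit of the single model parameter (thickness) to a set
        of measured signatures, via an exhaustive grid search."""
        best_t, best_sse = None, float("inf")
        for k in range(steps + 1):
            t = grid_max * k / steps
            sse = sum((m - forward_model(t)) ** 2 for m in measurements)
            if sse < best_sse:
                best_t, best_sse = t, sse
        return best_t
    ```

    Feeding the fitter synthetic "measurements" generated at a known thickness recovers that thickness to within the grid resolution, which is the basic consistency property any inverse solver must satisfy before confronting real data.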

  15. Energetic properties' investigation of removing flattening filter at phantom surface: Monte Carlo study using BEAMnrc code, DOSXYZnrc code and BEAMDP code

    NASA Astrophysics Data System (ADS)

    Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad

    2017-11-01

    The Monte Carlo method is considered the most accurate method for dose calculation in radiotherapy and for beam characterization. In this study, the Varian Clinac 2100 medical linear accelerator was modelled with and without its flattening filter (FF). The objective was to determine the flattening filter's impact on the energy properties of particles at the phantom surface, in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used in this study were BEAMnrc for simulating the linac head, DOSXYZnrc for simulating the absorbed dose in a water phantom, and BEAMDP for extracting energy properties. The field size was 10 × 10 cm², the simulated photon beam energy was 6 MV, and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energy properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy, and 1900% in energy fluence distribution, while the photon contribution increased by 50% in energy fluence, almost 18% in mean energy, and almost 35% in energy fluence distribution. Removing the flattening filter increases the electron contamination energy relative to the photon energy; our study can contribute to the evolution of flattening-filter-free linac configurations.
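
    The gamma-index validation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D global gamma computation under the 3%/3 mm criteria, as a sketch rather than the authors' implementation:

    ```python
    def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
        """1-D global gamma index: for each reference point, the minimum over
        evaluated points of sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2), with the dose
        criterion normalized to the reference maximum (global normalization)."""
        d_max = max(ref_dose)
        gammas = []
        for xr, dr in zip(ref_pos, ref_dose):
            g2 = min(
                ((xe - xr) / dta) ** 2 + ((de - dr) / (dd * d_max)) ** 2
                for xe, de in zip(eval_pos, eval_dose)
            )
            gammas.append(g2 ** 0.5)
        return gammas

    def pass_rate(gammas):
        """Percentage of points with gamma <= 1 (the acceptance rate)."""
        return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
    ```

    Identical reference and evaluated profiles give gamma = 0 everywhere and a 100% acceptance rate, matching the intuition behind the 99%/98% rates quoted above.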

  16. Coupled-resonator waveguide perfect transport single-photon by interatomic dipole-dipole interaction

    NASA Astrophysics Data System (ADS)

    Yan, Guo-an; Lu, Hua; Qiao, Hao-xue; Chen, Ai-xi; Wu, Wan-qing

    2018-06-01

    We theoretically investigate single-photon coherent transport in a one-dimensional coupled-resonator waveguide coupled to two quantum emitters with dipole-dipole interactions. The numerical simulations demonstrate that the transmission spectrum of the photon depends on the dipole-dipole interaction between the two atoms and on the photon-atom couplings. The dipole-dipole interaction may shift the dip positions in the spectra, and the coupling strength may broaden the frequency bandwidth of the transmission spectrum. We further demonstrate that the typical transmission spectra split into two dips due to the dipole-dipole interactions. This phenomenon may be used to manufacture new quantum waveguide devices.

  17. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model the physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD, with the exception that detected photons are recorded at multiple detector locations and weighted with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. MP-CFD is shown to be about 60 times faster than a regular forced detection MC program, with similar results.
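
    The core of forced detection is that, at each interaction site, the photon's contribution toward the detector is scored analytically rather than waiting for a chance hit. A sketch of the per-site score, with all parameters (phase-function value, detector solid angle, attenuation path) passed in explicitly as assumptions rather than taken from any particular code:

    ```python
    import math

    def forced_detection_score(weight, mu_t, distance, p_theta, solid_angle):
        """Forced-detection estimator at a scatter site: the current packet
        weight, times the probability of scattering into the detector's solid
        angle (phase-function value p_theta times solid_angle), times the
        probability exp(-mu_t * distance) of reaching the detector unattenuated."""
        return weight * p_theta * solid_angle * math.exp(-mu_t * distance)
    ```

    In MP-CFD this score is accumulated for many detector angles at every scatter site, which is why one simulated photon history contributes to all projections at once.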

  18. Electroluminescence Caused by the Transport of Interacting Electrons through Parallel Quantum Dots in a Photon Cavity

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Vidar; Abdulla, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei

    2018-02-01

    We show that a Rabi splitting of the states of strongly interacting electrons in parallel quantum dots, embedded in a short quantum wire placed in a photon cavity, can be produced by either the para- or the diamagnetic electron-photon interactions when the geometry of the system is properly accounted for and the photon field is tuned close to a resonance with the electron system. We use these two resonances to explore the electroluminescence caused by the transport of electrons through the one- and two-electron ground states of the system, and their corresponding conventional and vacuum electroluminescence, as the central system is opened up by coupling it to external leads acting as electron reservoirs. Our analysis indicates that high-order electron-photon processes are necessary to adequately construct the cavity-photon-dressed electron states needed to describe both types of electroluminescence.

  19. Modelling of radiation impact on ITER Beryllium wall

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Janeschitz, G.

    2009-04-01

    In the ITER H-mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts on the divertor armour, causing plasma contamination by back-propagating eroded carbon or tungsten. These impurities produce an enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved is the subject of the integrated tokamak code TOKES, which is currently under development. This work describes the new TOKES model for radiation transport through the confined plasma. Equations for the level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results, without accounting for resonance lines, are presented.

  20. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures, including prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  1. Computational Assessment of Naturally Occurring Neutron and Photon Background Radiation Produced by Extraterrestrial Sources

    DOE PAGES

    Miller, Thomas Martin; de Wet, Wouter C.; Patton, Bruce W.

    2015-10-28

    In this study, a computational assessment of the variation in terrestrial neutron and photon background from extraterrestrial sources is presented. The motivation of this assessment is to evaluate the practicality of developing a tool or database to estimate background in real time (or near-real time) during an experimental measurement, or even to predict the background for future measurements. The extraterrestrial source focused on during this assessment is naturally occurring galactic cosmic rays (GCRs). The MCNP6 transport code was used to perform the computational assessment. However, the GCR source available in MCNP6 was not used; rather, models developed and maintained by NASA were used to generate the GCR sources. The largest variation in both neutron and photon background spectra was found to be caused by changes in elevation on Earth's surface, which can be as large as an order of magnitude. All other perturbations produced background variations on the order of a factor of 3 or less. The most interesting finding was that ~80% of terrestrial background neutrons and ~50% of background photons are generated by interactions, in Earth's surface and in other naturally occurring and man-made objects near a detector, of particles arriving from extraterrestrial sources and of their progeny created in Earth's atmosphere. In conclusion, this assessment shows that it will be difficult to estimate the terrestrial background from extraterrestrial sources without a good understanding of a detector's surroundings. Therefore, estimating or predicting background during a measurement scenario such as a mobile random search will be difficult.

  2. Development of a web-based CT dose calculator: WAZA-ARI.

    PubMed

    Ban, N; Takahashi, F; Sato, K; Endo, A; Ono, K; Hasegawa, T; Yoshitake, T; Katsunuma, Y; Kai, M

    2011-09-01

    A web-based computed tomography (CT) dose calculation system (WAZA-ARI) is being developed based on modern techniques for radiation transport simulation and software implementation. Dose coefficients were calculated in a voxel-type Japanese adult male phantom (JM phantom) using the Particle and Heavy Ion Transport code System. In the Monte Carlo simulation, the phantom was irradiated with a 5-mm-thick, fan-shaped photon beam rotating in a plane normal to the body axis. The dose coefficients were integrated into the system, which runs as Java servlets within Apache Tomcat. Output of WAZA-ARI for the GE LightSpeed 16 was compared with dose values calculated similarly using the MIRD and ICRP Adult Male phantoms. There are some differences due to the phantom configurations, demonstrating the significance of dose calculation with appropriate phantoms. While the dose coefficients are currently available only for limited CT scanner models and scanning options, WAZA-ARI will be a useful tool in clinical practice when development is finalised.
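
    Systems of this kind typically fold precomputed Monte Carlo dose coefficients with the scan's exposure parameters at lookup time. The coefficient units and scan parameterization below are hypothetical, for illustration of the structure only, and are not WAZA-ARI's actual interface:

    ```python
    def organ_dose(dose_coefficients, tube_current_time_mAs, n_rotations):
        """Organ doses from precomputed per-mAs dose coefficients (hypothetical
        units: mGy per mAs per rotation), scaled by the scan's exposure settings."""
        return {
            organ: c * tube_current_time_mAs * n_rotations
            for organ, c in dose_coefficients.items()
        }
    ```

    The expensive transport calculation is done once per scanner model and phantom; the web service then reduces each query to a cheap table lookup and multiplication like this.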

  3. HARD X-RAY ASYMMETRY LIMITS IN SOLAR FLARE CONJUGATE FOOTPOINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daou, Antoun G.; Alexander, David, E-mail: agdaou@rice.edu, E-mail: dalex@rice.edu

    2016-11-20

    The transport of energetic electrons in a solar flare is modeled using a time-dependent one-dimensional Fokker–Planck code that incorporates asymmetric magnetic convergence. We derive the temporal and spectral evolution of the resulting hard X-ray (HXR) emission in the conjugate chromospheric footpoints, assuming thick-target photon production, and characterize the time evolution of the numerically simulated footpoint asymmetry and its relationship to the photospheric magnetic configuration. The thick-target HXR asymmetry in the conjugate footpoints is found to increase with magnetic field ratio, as expected. However, we find that the footpoint HXR asymmetry saturates for conjugate footpoint magnetic field ratios ≥4. This result is borne out in a direct comparison with observations of 44 double-footpoint flares. The presence of such a limit has not been reported before, and may serve as both a theoretical and observational benchmark for testing a range of particle transport and flare morphology constraints, particularly as a means to differentiate between isotropic and anisotropic particle injection.

  4. Analysis of tritium production in concentric spheres of oralloy and /sup 6/LiD irradiated by 14-MeV neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawcett, L.R. Jr.; Roberts, R.R. II; Hunter, R.E.

    1988-03-01

    Tritium production and activation of radiochemical detector foils in a sphere of ⁶LiD with an oralloy core, irradiated by a central source of 14-MeV neutrons, have been calculated and compared with experimental measurements. The experimental assembly consisted of an oralloy sphere surrounded by three solid ⁶LiD concentric shells, with ampules of ⁶LiH and ⁷LiH and activation foils located in several positions throughout the assembly. The Los Alamos Monte Carlo Neutron Photon transport code (MCNP) was used to calculate neutron transport throughout the system, tritium production in the ampules, and foil activation. The overall experimentally observed-to-calculated ratios of tritium production were 0.996 ± 2.5% in ⁶Li ampules and 0.903 ± 5.2% in ⁷Li ampules. Observed-to-calculated ratios for foil activation are also presented. 11 refs., 4 figs., 7 tabs.

  5. Simulation with EGS4 code of external beam of radiotherapy apparatus with workstation and PC gives similar results?

    PubMed

    Malataras, G; Kappas, C; Lovelock, D M; Mohan, R

    1997-01-01

    This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.

  6. Numerical Analysis of Organ Doses Delivered During Computed Tomography Examinations Using Japanese Adult Phantoms with the WAZA-ARI Dosimetry System.

    PubMed

    Takahashi, Fumiaki; Sato, Kaoru; Endo, Akira; Ono, Koji; Ban, Nobuhiko; Hasegawa, Takayuki; Katsunuma, Yasushi; Yoshitake, Takayasu; Kai, Michiaki

    2015-08-01

    A dosimetry system for computed tomography (CT) examinations, named WAZA-ARI, is being developed to accurately assess radiation doses to patients in Japan. For dose calculations in WAZA-ARI, organ doses were numerically analyzed using average adult Japanese male (JM) and female (JF) phantoms with the Particle and Heavy Ion Transport code System (PHITS). Experimental studies clarified the energy distribution of emitted photons and the dose profiles on the table for some multi-detector row CT (MDCT) devices. Numerical analyses using a source model in PHITS could specifically take into account the emission of x rays from the tube toward the table, with attenuation of photons through a beam-shaping filter, for each MDCT device based on the experimental results. The source model was validated by measuring the CT dose index (CTDI). Numerical analyses with PHITS revealed a correlation of organ doses with the body sizes of the JM and JF phantoms. The organ doses in the JM phantom were compared with data obtained using previously developed systems. In addition, the dose calculations in WAZA-ARI were verified against previously reported results for realistic NUBAS phantoms and against radiation dose measurements using a physical Japanese model (THRA1 phantom). The results imply that numerical analyses using the Japanese phantoms and specified source models can give reasonable dose estimates for MDCT devices for typical Japanese adults.

  7. A test of the IAEA code of practice for absorbed dose determination in photon and electron beams

    NASA Astrophysics Data System (ADS)

    Leitner, Arnold; Tiefenboeck, Wilhelm; Witzani, Josef; Strachotinsky, Christian

    1990-12-01

    The IAEA (International Atomic Energy Agency) code of practice TRS 277 gives recommendations for absorbed dose determination in high-energy photon and electron beams based on the use of ionization chambers calibrated in terms of exposure or air kerma. The scope of the work was to test the code of practice for cobalt-60 gamma radiation and for several radiation qualities at four different types of electron accelerators, and to compare the ionization chamber dosimetry with ferrous sulphate dosimetry. The results show agreement between the two methods within about one per cent for all the investigated qualities. In addition, the response of the TLD capsules of the IAEA/WHO TL dosimetry service was determined.
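
    The TRS 277 formalism determines absorbed dose to water from a corrected chamber reading via a chamber factor, a stopping-power ratio, and perturbation corrections. The sketch below shows the basic chain of factors; the functional form follows the code of practice, but all numerical defaults are illustrative rather than recommended values:

    ```python
    def n_d_from_air_kerma(N_K, g=0.003, k_att_k_m=0.985):
        """Absorbed-dose-to-air chamber factor from an air-kerma calibration
        factor: N_D = N_K * (1 - g) * k_att * k_m, where g is the bremsstrahlung
        fraction and k_att, k_m are chamber correction factors (values here are
        illustrative only)."""
        return N_K * (1.0 - g) * k_att_k_m

    def absorbed_dose_to_water(M, N_D, s_w_air, p_u=1.0):
        """Ionization chamber dosimetry in the TRS 277 style:
            D_w = M * N_D * s_w,air * p_u
        with M the corrected chamber reading, N_D the chamber factor, s_w,air
        the water/air stopping-power ratio, and p_u a perturbation correction."""
        return M * N_D * s_w_air * p_u
    ```

    With all factors set to unity the dose equals the reading, which makes the role of each correction factor easy to isolate when checking an implementation.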

  8. Asymmetric transmission and optical low-pass filtering in a stack of random media with graded transport mean free path

    NASA Astrophysics Data System (ADS)

    Bingi, J.; Hemalatha, M.; Anita, R. W.; Vijayan, C.; Murukeshan, V. M.

    2015-11-01

    Light transport and the physical phenomena related to light propagation in random media are very intriguing; they also provide scope for new paradigms of device functionality, most of which remain unexplored. Here we demonstrate, experimentally and by simulation, a novel kind of asymmetric light transmission (diffusion) in a stack of random media (SRM) with a graded transport mean free path. The structure is studied in terms of the transmission of photons propagated through, and photons generated within, the SRM. It is observed that the SRM exhibits an asymmetric transmission property with a transmission contrast of 0.25. In addition, it is shown that the SRM works as a perfect optical low-pass filter with a well-defined cutoff wavelength at 580 nm. Further, the photons generated within the SRM are found to exhibit functionality similar to an optical diode, with a transmission contrast of 0.62. The basis of this functionality is explained in terms of wavelength-dependent photon randomization and the graded transport mean free path of the SRM.

  9. Quantum nonlinear optics without photons

    NASA Astrophysics Data System (ADS)

    Stassi, Roberto; Macrì, Vincenzo; Kockum, Anton Frisk; Di Stefano, Omar; Miranowicz, Adam; Savasta, Salvatore; Nori, Franco

    2017-08-01

    Spontaneous parametric down-conversion is a well-known process in quantum nonlinear optics in which a photon incident on a nonlinear crystal spontaneously splits into two photons. Here we propose an analogous physical process where one excited atom directly transfers its excitation to a pair of spatially separated atoms with probability approaching 1. The interaction is mediated by the exchange of virtual rather than real photons. This nonlinear atomic process is coherent and reversible, so the pair of excited atoms can transfer the excitation back to the first one: the atomic analog of sum-frequency generation of light. The parameters used to investigate this process correspond to experimentally demonstrated values in ultrastrong circuit quantum electrodynamics. This approach can be extended to realize other nonlinear interatomic processes, such as four-atom mixing, and is an attractive architecture for the realization of quantum devices on a chip. We show that four-qubit mixing can efficiently implement quantum repetition codes and, thus, can be used for error-correction codes.

  10. Description of Transport Codes for Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.

    2011-01-01

    This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. NASA's HZETRN/QMSFRG meets these three criteria with a very high degree of accuracy.

  11. Valley photonic crystals for control of spin and topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Jian-Wen; Chen, Xiao-Dong; Zhu, Hanyu

    2016-11-28

    Photonic crystals offer unprecedented opportunity for light manipulation and applications in optical communication and sensing. Exploration of topology in photonic crystals and metamaterials with non-zero gauge fields has inspired a number of intriguing optical phenomena, such as one-way transport and Weyl points. Recently, a new degree of freedom, valley, has been demonstrated in two-dimensional materials. Here, we propose a concept of valley photonic crystals with electromagnetic duality symmetry but broken inversion symmetry. We observe a photonic valley Hall effect originating from valley-dependent spin-split bulk bands, even in topologically trivial photonic crystals. Valley-spin locking behaviour results in selective net spin flow inside bulk valley photonic crystals. We also show the independent control of valley and topology in a single system, long pursued in electronic systems, resulting in topologically protected flat edge states. Valley photonic crystals not only offer a route towards the observation of non-trivial states, but also open the way for device applications in integrated photonics and information processing using spin-dependent transport.

  12. Update and evaluation of decay data for spent nuclear fuel analyses

    NASA Astrophysics Data System (ADS)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data include basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure, consisting of internal consistency checks, comparisons to measurements and benchmarks, and code-to-code verifications, is performed at the individual isotope level and, using integral characteristics (e.g., decay heat, radioactivity, neutron and gamma sources), at the fuel assembly level. Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
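
    At its simplest, the decay-heat part of such a data pipeline is a summation over nuclides, P(t) = Σ λ_i N_i(t) Q_i. A minimal sketch of that summation, with an illustrative two-nuclide inventory (the atom counts and Q values below are placeholders, not SNF library data, and decay chains are ignored):

```python
import math

# Hypothetical inventory: nuclide -> (atoms N, decay constant lambda [1/s],
# recoverable energy per decay Q [MeV]). Values are illustrative only.
inventory = {
    "Cs-137": (1.0e24, math.log(2) / (30.08 * 3.156e7), 0.6617),
    "Sr-90":  (8.0e23, math.log(2) / (28.79 * 3.156e7), 0.546),
}

MEV_TO_J = 1.602e-13

def decay_heat(inv, t):
    """Total decay heat [W] at time t [s] after discharge, assuming each
    nuclide decays independently (no transmutation chains)."""
    return sum(n * lam * math.exp(-lam * t) * q * MEV_TO_J
               for (n, lam, q) in inv.values())

print(f"decay heat at 1 y: {decay_heat(inventory, 3.156e7):.2f} W")
```

    A real evaluation additionally folds in the emission spectra and chain transmutation described in the abstract; this sketch only shows the integral decay-heat characteristic used in the assembly-level checks.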

  13. Photo-Seebeck effect in tetragonal PbO single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondal, P. S.; Okazaki, R.; Taniguchi, H.

    2013-11-07

    We report the observation of photo-Seebeck effect in tetragonal PbO crystals. The photo-induced carriers contribute to the transport phenomena, and consequently the electrical conductivity increases and the Seebeck coefficient decreases with increasing photon flux density. A parallel-circuit model is used to evaluate the actual contributions of photo-excited carriers from the measured transport data. The photo-induced carrier concentration estimated from the Seebeck coefficient increases almost linearly with increasing photon flux density, indicating a successful photo-doping effect on the thermoelectric property. The mobility decreases by illumination but the reduction rate strongly depends on the illuminated photon energy. Possible mechanisms of such photon-energy-dependent mobility are discussed.
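
    In a parallel-circuit model of this kind, dark and photo-excited carriers act as parallel conduction channels, so the measured Seebeck coefficient is the conductivity-weighted mean of the two channels. A minimal sketch (the function name and example numbers are illustrative, not from the paper):

```python
def parallel_seebeck(sigma_dark, s_dark, sigma_photo, s_photo):
    """Effective Seebeck coefficient of two parallel conduction channels:
    the conductivity-weighted average of the channel Seebeck coefficients."""
    return (sigma_dark * s_dark + sigma_photo * s_photo) / (sigma_dark + sigma_photo)

# Example: equal conductivities -> plain average of the two coefficients.
print(parallel_seebeck(1.0, 100e-6, 1.0, 50e-6))  # 7.5e-05 V/K
```

    This weighting is why the measured Seebeck coefficient drops as the photo-excited (higher-conductivity, lower-S) channel grows with photon flux.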

  14. Activation product transport in fusion reactors. [RAPTOR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, A.C.

    1983-01-01

    Activated corrosion and neutron sputtering products will enter the coolant and/or tritium breeding material of fusion reactor power plants and experiments and cause personnel access problems. Radiation levels around plant components due to these products will cause difficulties with maintenance and repair operations throughout the plant. Similar problems are experienced around fission reactor systems. The determination of the transport of radioactive corrosion and neutron sputtering products through the system is achieved using the computer code RAPTOR. This code calculates the mass transfer of a number of activation products based on the corrosion and sputtering rates through the system, the deposition and release characteristics of various plant components, the neutron flux spectrum, as well as other plant parameters. RAPTOR assembles a system of first order linear differential equations into a matrix equation based upon the reactor system parameters. Included in the transfer matrix are the deposition and erosion coefficients, and the decay and activation data for the various plant nodes and radioactive isotopes. A source vector supplies the corrosion and neutron sputtering source rates. This matrix equation is then solved using a matrix operator technique to give the specific activity distribution of each radioactive species throughout the plant. Once the amount of mass transfer is determined, the photon transport due to the radioactive corrosion and sputtering product sources can be evaluated, and dose rates around the plant components of interest as a function of time can be determined. This method has been used to estimate the radiation hazards around a number of fusion reactor system designs.
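
    The matrix-operator solution described here amounts to solving dN/dt = A N + S, where A holds the deposition, erosion, and decay coefficients and S is the source vector; with N(0) = 0 the closed form is N(t) = A⁻¹(e^{At} − I)S. A toy two-node sketch under invented rates (this is not RAPTOR's actual transfer matrix):

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-node balance (coolant node, pipe-wall node): dN/dt = A @ N + S.
# All rates are illustrative placeholders, not RAPTOR data.
lam = 1e-6               # decay constant [1/s]
dep, ero = 1e-4, 1e-5    # deposition / re-erosion rates [1/s]
A = np.array([[-(dep + lam), ero],
              [dep, -(ero + lam)]])
S = np.array([1e10, 0.0])  # corrosion source into the coolant node [atoms/s]

def inventory(t):
    """N(t) = A^-1 (expm(A t) - I) S, the matrix-operator solution
    for zero initial inventory."""
    return np.linalg.solve(A, (expm(A * t) - np.eye(2)) @ S)

print(inventory(1e6))  # atoms in each node after ~11.6 days
```

    The specific activity per node then follows as λ·N, and the photon source for the subsequent dose-rate calculation is built from those activities.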

  15. Performance and structure of single-mode bosonic codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    2018-03-01

    The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
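
    The quantum error-correction conditions for photon loss can be illustrated numerically with the smallest binomial code, whose code words are the standard choices |W↑⟩ = (|0⟩ + |4⟩)/√2 and |W↓⟩ = |2⟩ (a textbook example, not a calculation from the paper): protection against a single loss requires, among other things, that both code words have the same mean photon number.

```python
import numpy as np

dim = 6  # truncated Fock space |0> ... |5>

def fock(n):
    v = np.zeros(dim)
    v[n] = 1.0
    return v

# Smallest binomial code protecting against single photon loss.
w_up = (fock(0) + fock(4)) / np.sqrt(2)
w_dn = fock(2)

nhat = np.diag(np.arange(dim))  # photon-number operator in the Fock basis

# Loss error-correction condition: equal mean photon number on both words.
print(w_up @ nhat @ w_up, w_dn @ nhat @ w_dn)  # 2.0 2.0
```

    Equal ⟨n̂⟩ means the environment learns nothing about the encoded qubit from whether a photon was lost, which is the Knill-Laflamme condition specialized to the lowest-order loss error.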

  16. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  17. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to alpha particles, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into JENDL Actinide File 2008 and JENDL-4.0. It has been extensively used in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are mentioned.

  18. MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias

    2014-08-01

    Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm finds an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm both into a Eulerian hydrodynamics code with a fixed, uniform grid and into an SPH code, where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray methods. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In the appendix we also discuss implementation details and parallelization strategies.
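
    The core idea, a local opacity-guided escape route rather than predefined rays, can be caricatured by a greedy walk on a 2D opacity grid that always steps toward the lowest-opacity unvisited neighbor while accumulating τ = Σ κ·h. This is a deliberately simplified stand-in for the idea, not the authors' algorithm:

```python
import numpy as np

def escape_tau(kappa, start, h=1.0):
    """Greedy estimate of the optical depth from `start` to the grid edge:
    repeatedly step to the unvisited neighbor with the lowest opacity,
    accumulating tau = sum(kappa * h) along the path."""
    i, j = start
    tau = kappa[i, j] * h
    visited = {start}
    while 0 < i < kappa.shape[0] - 1 and 0 < j < kappa.shape[1] - 1:
        nbrs = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        nbrs = [p for p in nbrs if p not in visited]
        if not nbrs:
            break  # boxed in; stop with the tau accumulated so far
        i, j = min(nbrs, key=lambda p: kappa[p])
        visited.add((i, j))
        tau += kappa[i, j] * h
    return tau

# Opacity peaked in the center: the walk escapes through a thin direction.
x = np.linspace(-1, 1, 21)
kappa = np.exp(-(x[:, None]**2 + x[None, :]**2) / 0.1)
print(escape_tau(kappa, (10, 10)))
```

    MODA itself is more careful (it works in 3D, uses opacity gradients, and handles SPH particle data via a tree), but the greedy local-information flavor is the same.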

  19. A velocity-dependent anomalous radial transport model for (2-D, 2-V) kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, Kowsik; Krasheninnikov, Sergei; Cohen, Ron; Rognlien, Tom

    2008-11-01

    Plasma turbulence constitutes a significant part of radial plasma transport in magnetically confined plasmas. This turbulent transport is modeled in the form of anomalous convection and diffusion coefficients in fluid transport codes. There is a need to model the same transport in continuum kinetic edge codes [such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory] with non-Maxwellian distributions. We present an anomalous transport model with velocity-dependent convection and diffusion coefficients leading to a diagonal transport matrix similar to that used in contemporary fluid transport models (e.g., UEDGE). Also presented are results of simulations corresponding to radial transport due to long-wavelength ExB turbulence using a velocity-independent diffusion coefficient. A BGK collision model is used to enable comparison with fluid transport codes.
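
    A diagonal transport matrix of this kind evaluates, for each velocity, an anomalous radial flux Γ(r, v) = −D(v) ∂F/∂r + V(v) F. A sketch on a toy (r, v) grid; the 1/v forms of D and V below are illustrative choices, not the coefficients used in TEMPEST or UEDGE:

```python
import numpy as np

# Toy (r, v) grid for a velocity-dependent anomalous radial flux,
# Gamma = -D(v) dF/dr + V(v) F, applied row-by-row in velocity.
r = np.linspace(0.0, 1.0, 50)
v = np.linspace(0.1, 3.0, 40)
R, Vgrid = np.meshgrid(r, v, indexing="ij")

F = np.exp(-5 * R) * np.exp(-Vgrid**2)  # model distribution F(r, v)
D = 0.1 / v                              # illustrative diffusion coefficient D(v)
Vc = -0.05 / v                           # illustrative inward convection V(v)

dFdr = np.gradient(F, r, axis=0)         # radial derivative at fixed v
Gamma = -D[None, :] * dFdr + Vc[None, :] * F
print(Gamma.shape)  # (50, 40)
```

    Because the matrix is diagonal in velocity, each velocity slice transports independently, which is what makes the model directly comparable to the fluid-code convection/diffusion form after taking velocity moments.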

  20. Determination of absorbed dose to water for high-energy photon and electron beams-comparison of the standards DIN 6800-2 (1997), IAEA TRS 398 (2000) and DIN 6800-2 (2006)

    PubMed Central

    Zakaria, Golam Abu; Schuette, Wilhelm

    2007-01-01

    For the determination of the absorbed dose to water for high-energy photon and electron beams, the IAEA code of practice TRS-398 (2000) is applied internationally. In Germany, the German dosimetry protocol DIN 6800-2 (1997) is used. Recently, the DIN standard has been revised and published as Draft National Standard DIN 6800-2 (2006). It has largely adopted the methodology and dosimetric data of the code of practice. This paper compares these three dosimetry protocols systematically and identifies similarities as well as differences. The investigation was done with 6 and 18 MV photon as well as 5 to 21 MeV electron beams. While only cylindrical chambers were used for photon beams, measurements of electron beams were performed using cylindrical as well as plane-parallel chambers. The discrepancies in the determination of absorbed dose to water between the three protocols were 0.4% for photon beams and 1.5% for electron beams. Comparative measurements showed a deviation of less than 0.5% between our measurements following protocol DIN 6800-2 (2006) and a TLD inter-comparison procedure in an external audit. PMID:21217912

  1. Determination of absorbed dose to water for high-energy photon and electron beams-comparison of the standards DIN 6800-2 (1997), IAEA TRS 398 (2000) and DIN 6800-2 (2006).

    PubMed

    Zakaria, Golam Abu; Schuette, Wilhelm

    2007-01-01

    For the determination of the absorbed dose to water for high-energy photon and electron beams, the IAEA code of practice TRS-398 (2000) is applied internationally. In Germany, the German dosimetry protocol DIN 6800-2 (1997) is used. Recently, the DIN standard has been revised and published as Draft National Standard DIN 6800-2 (2006). It has largely adopted the methodology and dosimetric data of the code of practice. This paper compares these three dosimetry protocols systematically and identifies similarities as well as differences. The investigation was done with 6 and 18 MV photon as well as 5 to 21 MeV electron beams. While only cylindrical chambers were used for photon beams, measurements of electron beams were performed using cylindrical as well as plane-parallel chambers. The discrepancies in the determination of absorbed dose to water between the three protocols were 0.4% for photon beams and 1.5% for electron beams. Comparative measurements showed a deviation of less than 0.5% between our measurements following protocol DIN 6800-2 (2006) and a TLD inter-comparison procedure in an external audit.
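
    All three protocols share the same basic formalism: the absorbed dose to water is the corrected chamber reading times the Co-60 calibration coefficient times a beam-quality correction, D_w = M · N_D,w · k_Q. A minimal sketch; the numerical values below (reading, N_D,w, k_Q) are illustrative, not taken from the paper:

```python
def absorbed_dose_to_water(m_raw, n_dw, k_q, k_tp=1.0, k_pol=1.0, k_s=1.0):
    """D_w = M * N_D,w * k_Q: corrected reading [C] times the Co-60
    calibration coefficient [Gy/C] times the beam-quality correction.
    Influence corrections (air density, polarity, recombination)
    default to 1 here and are illustrative."""
    m = m_raw * k_tp * k_pol * k_s   # fully corrected chamber reading [C]
    return m * n_dw * k_q            # absorbed dose to water [Gy]

# e.g. a 20 nC reading with N_D,w = 5.38e7 Gy/C and k_Q = 0.992
print(absorbed_dose_to_water(20e-9, 5.38e7, 0.992))  # ~1.067 Gy
```

    The protocol-to-protocol discrepancies quoted in the abstract come almost entirely from differing k_Q (and, for electrons, perturbation) data, not from the formalism itself.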

  2. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code

    NASA Astrophysics Data System (ADS)

    Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-01

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson&Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm2 and a thickness of 0.5 µm which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector have been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. 
All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  3. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.

    PubMed

    Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

    2007-01-07

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson&Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm(2) and a thickness of 0.5 microm which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water build-up caps, together with the orientation of the detector have been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. 
All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.

  4. Multi-channel photon counting DOT system based on digital lock-in detection technique

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng

    2011-02-01

    Relying on the deeper penetration of light in tissue, diffuse optical tomography (DOT) achieves organ-level tomographic diagnosis and can provide information on anatomical and physiological features. DOT has been widely used for imaging of the breast, neonatal cerebral oxygenation status, and blood oxygen kinetics, owing to its non-invasive nature, safety, and other advantages. Continuous-wave DOT image reconstruction algorithms require measurement of the surface distribution of the output photon flow excited by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which uses an optical switch to direct light into optical fibers at different locations. However, when there are many source locations, or when multiple wavelengths are used, the measurement time with TDM and the interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies are incident on the imaging chamber simultaneously. The signal corresponding to an individual source is recovered from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for a photon counting measurement system is implemented on an FPGA development platform. A dual-channel DOT photon counting experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method developed in this paper greatly accelerates DOT system measurement and can also obtain multiple simultaneous measurements at different source-detector locations.
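
    The digital lock-in step itself is straightforward: mix the detected signal with quadrature references at each source's modulation frequency and low-pass (here, simply average). A sketch with two simulated sources at illustrative frequencies chosen so an integer number of cycles fits the record; parameters are not the paper's hardware values:

```python
import numpy as np

fs = 100_000.0                  # sample rate [Hz]
t = np.arange(0, 0.5, 1 / fs)
f1, f2 = 1000.0, 1300.0         # modulation frequencies of the two sources

# Mixed detector signal: two sine-modulated sources plus noise.
rng = np.random.default_rng(0)
sig = 2.0 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
sig += rng.normal(0, 0.2, t.size)

def lockin(signal, f, t):
    """Digital lock-in: mix with in-phase and quadrature references and
    average (the average acts as the low-pass filter); returns the
    amplitude of the signal component at frequency f."""
    i = np.mean(signal * np.sin(2 * np.pi * f * t))
    q = np.mean(signal * np.cos(2 * np.pi * f * t))
    return 2 * np.hypot(i, q)

print(lockin(sig, f1, t), lockin(sig, f2, t))  # ~2.0 and ~0.5
```

    Because the references are orthogonal, each channel rejects the other source and the noise, which is what allows both sources to illuminate the chamber simultaneously instead of being time-multiplexed.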

  5. Efficient transportation of nano-sized particles along slotted photonic crystal waveguide.

    PubMed

    Lin, Pin-Tso; Lee, Po-Tsung

    2012-01-30

    We design a slotted photonic crystal waveguide (S-PhCW) and numerically show that it can efficiently transport a polystyrene particle with a diameter as small as 50 nm in a 100 nm slot. Excellent optical confinement and the slow light effect provided by the photonic crystal structure greatly enhance the optical force exerted on the particle. The S-PhCW can thus transport the particle with an optical propulsion force as strong as 5.3 pN/W, which is over 10 times stronger than that generated by a slotted strip waveguide (S-SW). In addition, the vertical optical attraction force induced in the S-PhCW is over 2 times stronger than that of the S-SW. Therefore, the S-PhCW transports particles not only efficiently but also stably. We anticipate this waveguide structure will be beneficial for future lab-on-chip development.

  6. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  7. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
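
    A first-look comparison of dose-depth curves from two codes is usually just a pointwise relative difference on a common depth grid. A sketch with invented numbers (the dose values below are placeholders, not results from HZETRN, FLUKA, or any of the codes named above):

```python
import numpy as np

# Hypothetical dose-depth curves [mGy/day] from two transport codes
# evaluated at the same shielding depths [g/cm^2]; illustrative only.
depth = np.array([0.0, 1.0, 5.0, 10.0, 20.0, 50.0])
dose_deterministic = np.array([0.90, 0.85, 0.72, 0.61, 0.48, 0.30])
dose_monte_carlo   = np.array([0.92, 0.84, 0.70, 0.63, 0.50, 0.29])

# Pointwise relative difference, taking the Monte Carlo result as reference.
rel_diff = 100 * (dose_deterministic - dose_monte_carlo) / dose_monte_carlo
for d, rd in zip(depth, rel_diff):
    print(f"{d:5.1f} g/cm^2: {rd:+5.1f} %")
```

    In practice the same comparison is repeated per secondary-particle species and per flux spectrum, which is where the codes' physics differences show up most clearly.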

  8. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  9. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
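
    Conceptually, a retrieval from a vertically inhomogeneous cloud returns a weighted average of the layer effective radii, r_ret = Σ w_i r_i / Σ w_i, with weights largest near cloud top for reflectance measurements. A toy sketch; the exponential decay of the weights with optical depth is an illustrative stand-in for the maximum-penetration weighting discussed in the abstract, not the paper's actual weighting function:

```python
import numpy as np

# Layer effective radii [um], cloud top first, and an assumed weighting
# that decays with cumulative optical depth into the cloud.
r_eff_layers = np.array([14.0, 12.0, 10.0, 8.0, 6.0])
tau_layers = np.cumsum(np.full(5, 2.0))   # cumulative optical depth per layer
w = np.exp(-tau_layers / 4.0)             # illustrative weighting function

r_eff_retrieved = np.sum(w * r_eff_layers) / np.sum(w)
print(f"weighted r_eff ~ {r_eff_retrieved:.1f} um")  # biased toward cloud top
```

    The unweighted layer mean here is 10 um, so the weighting pulls the retrieval toward the cloud-top value, which is the qualitative behavior the abstract's weighting functions quantify (and which depends on droplet absorption and viewing geometry).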

  10. Radiation transport around Kerr black holes

    NASA Astrophysics Data System (ADS)

    Schnittman, Jeremy David

    This Thesis describes the basic framework of a relativistic ray-tracing code for analyzing accretion processes around Kerr black holes. We begin in Chapter 1 with a brief historical summary of the major advances in black hole astrophysics over the past few decades. In Chapter 2 we present a detailed description of the ray-tracing code, which can be used to calculate the transfer function between the plane of the accretion disk and the detector plane, an important tool for modeling relativistically broadened emission lines. Observations from the Rossi X-Ray Timing Explorer have shown the existence of high frequency quasi-periodic oscillations (HFQPOs) in a number of black hole binary systems. In Chapter 3, we employ a simple "hot spot" model to explain the position and amplitude of these HFQPO peaks. The power spectrum of the periodic X-ray light curve consists of multiple peaks located at integral combinations of the black hole coordinate frequencies, with the relative amplitude of each peak determined by the orbital inclination, eccentricity, and hot spot arc length. In Chapter 4, we introduce additional features to the model to explain the broadening of the QPO peaks as well as the damping of higher frequency harmonics in the power spectrum. The complete model is used to fit the power spectra observed in XTE J1550-564, giving confidence limits on each of the model parameters. In Chapter 5 we present a description of the structure of a relativistic alpha- disk around a Kerr black hole. Given the surface temperature of the disk, the observed spectrum is calculated using the transfer function mentioned above. The features of this modified thermal spectrum may be used to infer the physical properties of the accretion disk and the central black hole. In Chapter 6 we develop a Monte Carlo code to calculate the detailed propagation of photons from a hot spot emitter scattering through a corona surrounding the black hole. 
The coronal scattering has two major observable effects: the inverse-Compton process alters the photon spectrum by adding a high energy power-law tail, and the random scattering of each photon effectively damps out the highest frequency modulations in the X-ray light curve. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617- 253-5668; Fax 617-253-1690.)
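
    The damping of high-frequency modulations by coronal scattering can be caricatured by convolving a periodic light curve with a scattering-delay kernel: an exponential delay of mean τ suppresses a harmonic at frequency f by roughly 1/√(1 + (2πfτ)²), so higher harmonics are damped more. A sketch with illustrative time scales (not fitted to XTE J1550-564):

```python
import numpy as np

# Periodic light curve with a fundamental and a third harmonic.
dt = 0.001
t = np.arange(0, 2.0, dt)
nu = 200.0                                   # QPO frequency [Hz]
lc = 1.0 + 0.5 * np.sin(2 * np.pi * nu * t) + 0.3 * np.sin(2 * np.pi * 3 * nu * t)

# Exponential scattering-delay kernel (mean delay tau_sc), normalized.
tau_sc = 0.004                               # mean scattering delay [s]
kernel = np.exp(-t[:200] / tau_sc)
kernel /= kernel.sum()
lc_obs = np.convolve(lc, kernel, mode="same")

def harmonic_amp(x, f):
    """Amplitude of the Fourier component of x at frequency f."""
    return 2 * abs(np.mean(x * np.exp(-2j * np.pi * f * t)))

for f in (nu, 3 * nu):
    print(f, harmonic_amp(lc, f), harmonic_amp(lc_obs, f))
```

    This reproduces the qualitative effect described above: the scattered light curve keeps its fundamental (attenuated) while the higher-frequency harmonic is suppressed much more strongly.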

  11. Simulation of ultra-high energy photon propagation in the geomagnetic field

    NASA Astrophysics Data System (ADS)

    Homola, P.; Góra, D.; Heck, D.; Klages, H.; Pękala, J.; Risse, M.; Wilczyńska, B.; Wilczyński, H.

    2005-12-01

    The identification of primary photons or specifying stringent limits on the photon flux is of major importance for understanding the origin of ultra-high energy (UHE) cosmic rays. UHE photons can initiate particle cascades in the geomagnetic field, which leads to significant changes in the subsequent atmospheric shower development. We present a Monte Carlo program allowing detailed studies of conversion and cascading of UHE photons in the geomagnetic field. The program, named PRESHOWER, can be used either as an independent tool or together with a shower simulation code. With the stand-alone version of the code it is possible to investigate various properties of the particle cascade induced by UHE photons interacting in the Earth's magnetic field before entering the Earth's atmosphere. Combining this program with an extensive air shower simulation code such as CORSIKA offers the possibility of investigating signatures of photon-initiated showers. In particular, features can be studied that help to discern such showers from the ones induced by hadrons. As an illustration, calculations for the conditions of the southern part of the Pierre Auger Observatory are presented. Catalogue identifier: ADWG. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG. Program obtainable: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer on which the program has been thoroughly tested: Intel-Pentium based PC. Operating system: Linux, DEC-Unix. Programming language used: C, FORTRAN 77. Memory required to execute with typical data: <100 kB. No. of bits in a word: 32. Has the code been vectorized?: no. Number of lines in distributed program, including test data, etc.: 2567. Number of bytes in distributed program, including test data, etc.: 25 690. Distribution format: tar.gz. Other procedures used in PRESHOWER: IGRF [N.A. 
Tsyganenko, National Space Science Data Center, NASA GSFC, Greenbelt, MD 20771, USA, http://nssdc.gsfc.nasa.gov/space/model/magnetos/data-based/geopack.html], bessik, ran2 [Numerical Recipes, http://www.nr.com]. Nature of the physical problem: Simulation of a cascade of particles initiated by a UHE photon passing through the geomagnetic field above the Earth's atmosphere. Method of solution: The primary photon is tracked until its conversion into an e+e- pair or until it reaches the upper atmosphere. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). The procedure ends at the top of the atmosphere and the shower particle data are saved. Restrictions on the complexity of the problem: Gamma conversion into particles other than an electron pair has not been taken into account. Typical running time: 100 preshower events with primary energy 10 eV require about 50 min of CPU time on an 800 MHz processor; with 10 eV, the simulation time for 100 events grows to about 500 min.
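The tracking loop in the "Method of solution" above can be caricatured as follows. This is an illustrative sketch only: the per-step probabilities (`p_conv`, `p_brem`) and the energy-splitting rule are invented for demonstration, not the actual PRESHOWER physics:

```python
import random

def toy_preshower(steps=1000, p_conv=0.002, p_brem=0.01, seed=1):
    """Toy version of the preshower tracking scheme: follow a primary gamma
    until it converts into an e+e- pair or reaches the top of the atmosphere;
    converted electrons may then radiate bremsstrahlung photons.
    All probabilities here are made up for illustration."""
    rng = random.Random(seed)
    particles = [("gamma", 1.0)]               # (type, energy fraction)
    for _ in range(steps):
        if rng.random() < p_conv:
            particles = [("e-", 0.5), ("e+", 0.5)]  # pair conversion
            break
    else:
        return particles                       # reached atmosphere unconverted
    # crude bremsstrahlung stage over the remaining path
    out = []
    for kind, e in particles:
        while rng.random() < p_brem and e > 0.01:
            out.append(("gamma", e * 0.1))     # radiated photon carries 10%
            e *= 0.9
        out.append((kind, e))
    return out

shower = toy_preshower()
```

By construction the energy fractions always sum to the primary's, whether or not conversion occurs, mirroring the bookkeeping a real cascade code must maintain.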

  12. Secured Optical Communications Using Quantum Entangled Two-Photon Transparency Modulation

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet (Inventor); Kojima, Jun (Inventor); Lekki, John (Inventor)

    2015-01-01

    A system and method is disclosed wherein optical signals are coded in a transmitter by tuning or modulating the inter-beam delay time (which modulates the fourth-order coherence) between pairs of entangled photons. The photon pairs are either absorbed or not absorbed (transparent) by an atomic or molecular fluorescer in a receiver, depending on the inter-beam delay introduced in the entangled photon pairs. Upon absorption, corresponding fluorescent optical emissions follow at a characteristic wavelength and are detected by a photon detector. The advantage of the disclosed system is that it eliminates the need for a coincidence counter in entanglement-based secure optical communications, because the absorber itself acts as a coincidence counter for entangled photon pairs.

  13. Relativistic theory of particles in a scattering flow III: photon transport.

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Norman, C. A.

    2018-06-01

    We use the theory developed in Achterberg & Norman (2018a) and Achterberg & Norman (2018b) to calculate the stress due to photons that are scattered elastically by a relativistic flow. We show that the energy-momentum tensor of the radiation takes the form proposed by Eckart (1940). In particular we show that no terms associated with a bulk viscosity appear if one makes the diffusion approximation for radiation transport and treats the radiation as a separate fluid. We find only shear (dynamic) viscosity terms and heat flow terms in our expression for the energy-momentum tensor. This conclusion holds quite generally for different forms of scattering: Krook-type integral scattering, diffusive (Fokker-Planck) scattering and Thomson scattering. We also derive the transport equation in the diffusion approximation that shows the effects of the flow on the photon gas in the form of a combination of adiabatic heating and an irreversible heating term. We find no diffusive changes to the comoving number density and energy density of the scattered photons, in contrast with some published results in Radiation Hydrodynamics. It is demonstrated that these diffusive corrections to the number- and energy density of the photons are in fact higher-order terms that can (and should) be neglected in the diffusion approximation. Our approach eliminates these terms at the root of the expansion that yields the anisotropic terms in the phase-space density of particles and photons, the terms responsible for the photon viscosity.

  14. Asymmetric photon transport in organic semiconductor nanowires through electrically controlled exciton diffusion

    PubMed Central

    Cui, Qiu Hong; Peng, Qian; Luo, Yi; Jiang, Yuqian; Yan, Yongli; Wei, Cong; Shuai, Zhigang; Sun, Cheng; Yao, Jiannian; Zhao, Yong Sheng

    2018-01-01

    The ability to steer the flow of light toward desired propagation directions is critically important for the realization of key functionalities in optical communication and information processing. Although various schemes have been proposed for this purpose, the lack of capability to incorporate an external electric field to effectively tune the light propagation has severely limited the on-chip integration of photonics and electronics. Because of the noninteractive nature of photons, it is only possible to electrically control the flow of light by modifying the refractive index of materials through the electro-optic effect. However, these weak optical effects need to be strongly amplified for practical applications in high-density photonic integration. We demonstrate a new strategy that takes advantage of the strong exciton-photon coupling in active waveguides to effectively manipulate photon transport by controlling the interaction between excitons and the external electric field. Single-crystal organic semiconductor nanowires were used to generate highly stable Frenkel exciton polaritons with strong binding and diffusion abilities. By making use of directional exciton diffusion in an external electric field, we have realized electrically driven asymmetric photon transport and thus directional light propagation in a single nanowire. With this new concept, we constructed a dual-output single-wire device that functions as an electrically controlled single-pole double-throw optical switch with fast temporal response and high switching frequency. Our findings may inspire new concepts and device architectures for optical information processing. PMID:29556529

  15. A study of photon propagation in free-space based on hybrid radiosity-radiance theorem.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Liang, Jimin; Wang, Lin; Yang, Da'an; Garofalakis, Anikitos; Ripoll, Jorge; Tian, Jie

    2009-08-31

    Noncontact optical imaging has attracted increasing attention in recent years due to its significant advantages in detection sensitivity, spatial resolution, image quality, and system simplicity compared with contact measurement. However, simulating photon transport in free space remains an extremely challenging topic because of the complexity of the optical system. For this purpose, this paper proposes an analytical model for photon propagation in free space based on the hybrid radiosity-radiance theorem (HRRT). It combines Lambert's cosine law and the radiance theorem to handle the influence of the complicated lens and to simplify the photon transport process in the optical system. The performance of the proposed model is evaluated and validated with numerical simulations and physical experiments. Qualitative comparisons of the flux distribution at the detector are presented. In particular, error analysis demonstrates the feasibility and potential of the proposed model for simulating photon propagation in free space.
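One of the two ingredients named above, Lambert's cosine law, can be sketched for a single small Lambertian patch; `detected_power` is a hypothetical helper of ours, not the paper's HRRT model:

```python
import math

def detected_power(radiance, dA, dOmega, theta):
    """Power collected from a small Lambertian surface patch of area dA,
    radiance L, viewed within solid angle dOmega at angle theta from the
    surface normal: P = L * cos(theta) * dA * dOmega (Lambert's cosine law)."""
    return radiance * math.cos(theta) * dA * dOmega

# A patch viewed at 60 degrees delivers half the on-axis power:
p_on_axis = detected_power(1.0, 1e-6, 1e-3, 0.0)
p_60deg = detected_power(1.0, 1e-6, 1e-3, math.pi / 3)
```

The radiance theorem (conservation of radiance through a lossless optical system) then lets this surface-level quantity be propagated through the lens to the detector plane.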

  16. Suppression of population transport and control of exciton distributions by entangled photons

    PubMed Central

    Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul

    2013-01-01

    Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subject to this uncertainty. PMID:23653194

  17. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO, and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
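The diffusion-plus-convection ansatz can be sketched on a 1D radial grid. This is a schematic illustration of ours (not the TEMPEST/NEO model), showing that with zero convection the flux reduces to the purely diagonal gradient-driven limit:

```python
import numpy as np

def radial_flux(n, x, D, V):
    """Anomalous radial particle flux Gamma = -D dn/dx + V n on a 1D grid:
    a diffusive (gradient-driven) term plus a convective (pinch) term."""
    dndx = np.gradient(n, x)
    return -D * dndx + V * n

x = np.linspace(0.0, 1.0, 101)
n = 1.0 - 0.5 * x                        # linear density profile
flux = radial_flux(n, x, D=0.2, V=0.0)   # pure gradient-driven (diagonal) case
```

With V = 0 and a linear profile the flux is spatially constant (Gamma = 0.2 × 0.5 = 0.1), i.e. exactly the diagonal transport-matrix behavior; a nonzero V adds the off-diagonal convective contribution.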

  18. Deterministic and robust generation of single photons from a single quantum dot with 99.5% indistinguishability using adiabatic rapid passage.

    PubMed

    Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2014-11-12

    Single photons are attractive candidates for quantum bits (qubits) in quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.

  19. SU-F-I-53: Coded Aperture Coherent Scatter Spectral Imaging of the Breast: A Monte Carlo Evaluation of Absorbed Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Lakshmanan, M; Fong, G

    Purpose: Coherent scatter based imaging has shown improved contrast and molecular specificity over conventional digital mammography; however, the biological risks have not been quantified due to a lack of accurate information on absorbed dose. This study intends to characterize the dose distribution and average glandular dose from coded aperture coherent scatter spectral imaging of the breast. The dose deposited in the breast from this new diagnostic imaging modality has not yet been quantitatively evaluated. Here, various digitized anthropomorphic phantoms are tested in a Monte Carlo simulation to evaluate the absorbed dose distribution and average glandular dose using clinically feasible scan protocols. Methods: Geant4 Monte Carlo radiation transport simulation software is used to replicate the coded aperture coherent scatter spectral imaging system. Energy sensitive, photon counting detectors are used to characterize the x-ray beam spectra for various imaging protocols. These input spectra are cross-validated with the results from XSPECT, a commercially available application that yields x-ray tube specific spectra for the operating parameters employed. XSPECT is also used to determine the appropriate number of photons emitted per mAs of tube current at a given kVp tube potential. With the implementation of the XCAT digital anthropomorphic breast phantom library, a variety of breast sizes with differing anatomical structure are evaluated. Simulations were performed with and without compression of the breast for dose comparison. Results: Through the Monte Carlo evaluation of a diverse population of breast types imaged under real-world scan conditions, a clinically relevant average glandular dose for this new imaging modality is extrapolated. 
Conclusion: With access to the physical coherent scatter imaging system used in the simulation, the results of this Monte Carlo study may be used to directly influence the future development of the modality to keep breast dose to a minimum while still maintaining clinically viable image quality.

  20. Too Hot for Photon-Assisted Transport: Hot-Electrons Dominate Conductance Enhancement in Illuminated Single-Molecule Junctions.

    PubMed

    Fung, E-Dean; Adak, Olgun; Lovat, Giacomo; Scarabelli, Diego; Venkataraman, Latha

    2017-02-08

    We investigate light-induced conductance enhancement in single-molecule junctions via photon-assisted transport and hot-electron transport. Using 4,4'-bipyridine bound to Au electrodes as a prototypical single-molecule junction, we report a 20-40% enhancement in conductance under illumination with 980 nm wavelength radiation. We probe the effects of subtle changes in the transmission function on light-enhanced current and show that discrete variations in the binding geometry result in a 10% change in enhancement. Importantly, we prove theoretically that the steady-state behavior of photon-assisted transport and hot-electron transport is identical but that hot-electron transport is the dominant mechanism for optically induced conductance enhancement in single-molecule junctions when the wavelength used is absorbed by the electrodes and the hot-electron relaxation time is long. We confirm this experimentally by performing polarization-dependent conductance measurements of illuminated 4,4'-bipyridine junctions. Finally, we perform lock-in type measurements of optical current and conclude that currents due to laser-induced thermal expansion mask optical currents. This work provides a robust experimental framework for studying mechanisms of light-enhanced transport in single-molecule junctions and offers tools for tuning the performance of organic optoelectronic devices by analyzing detailed transport properties of the molecules involved.

  1. Organ and effective dose coefficients for cranial and caudal irradiation geometries: photons

    DOE PAGES

    Veinot, K. G.; Eckerman, K. F.; Hertel, N. E.

    2015-05-02

    With the introduction of new recommendations of the International Commission on Radiological Protection (ICRP) in Publication 103, the methodology for determining the protection quantity, effective dose, has been modified. The modifications include changes to the defined organs and tissues, the associated tissue weighting factors, radiation weighting factors and the introduction of reference sex-specific computational phantoms. Computations of equivalent doses in organs and tissues are now performed in both the male and female phantoms and the sex-averaged values used to determine the effective dose. Dose coefficients based on the ICRP 103 recommendations were reported in ICRP Publication 116, the revision of ICRP Publication 74 and ICRU Publication 57. The coefficients were determined for the following irradiation geometries: anterior-posterior (AP), posterior-anterior (PA), right and left lateral (RLAT and LLAT), rotational (ROT) and isotropic (ISO). In this work, the methodology of ICRP Publication 116 was used to compute dose coefficients for photon irradiation of the body with parallel beams directed upward from below the feet (caudal) and directed downward from above the head (cranial). These geometries may be encountered in the workplace from personnel standing on contaminated surfaces or volumes and from overhead sources. Calculations of organ and tissue kerma and absorbed doses for caudal and cranial exposures to photons ranging in energy from 10 keV to 10 GeV have been performed using the MCNP6.1 radiation transport code and the adult reference phantoms of ICRP Publication 110. As with calculations reported in ICRP 116, the effects of charged-particle transport are evident when compared with values obtained by using the kerma approximation. At lower energies the effective dose per particle fluence for cranial and caudal exposures is less than that for AP orientations, while above about 30 MeV the cranial and caudal values are greater.

  3. Coupled particle-in-cell and Monte Carlo transport modeling of intense radiographic sources

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Welch, D. R.; Oliver, B. V.; Clark, R. E.; Johnson, D. L.; Maenchen, J. E.; Menge, P. R.; Olson, C. L.; Rovang, D. C.

    2002-03-01

    Dose-rate calculations for intense electron-beam diodes using particle-in-cell (PIC) simulations along with Monte Carlo electron/photon transport calculations are presented. The electromagnetic PIC simulations are used to model the dynamic operation of the rod-pinch and immersed-B diodes. These simulations include algorithms for tracking electron scattering and energy loss in dense materials. The positions and momenta of photons created in these materials are recorded and separate Monte Carlo calculations are used to transport the photons to determine the dose in far-field detectors. These combined calculations are used to determine radiographer equations (dose scaling as a function of diode current and voltage) that are compared directly with measured dose rates obtained on the SABRE generator at Sandia National Laboratories.
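A radiographer equation of the kind mentioned above is typically a power law in diode voltage at fixed current. A sketch (the data and the exponent are synthetic, invented for illustration) of recovering the voltage exponent by log-linear least squares:

```python
import numpy as np

# Synthetic dose-rate data (made up for illustration): D = k * I * V**n
V = np.array([4.0, 5.0, 6.0, 7.0, 8.0])   # diode voltage, MV
I = 50.0                                   # diode current, kA, held fixed
true_k, true_n = 2.0, 2.8
D = true_k * I * V**true_n                 # "measured" far-field dose rates

# Fitting log D = n log V + log(k I) recovers the voltage exponent n
n_fit, log_kI = np.polyfit(np.log(V), np.log(D), 1)
```

Fitting in log space turns the power law into a straight line, so a single linear least-squares solve yields both the exponent and the prefactor.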

  4. Measurement and interpretation of electron angle at MABE beam stop

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Coleman, P. D.; Poukey, J. W.

    1985-02-01

    The mean angle of incidence at the beam stop of a 60 kA, 7 MV annular electron beam in the 20 kG guide field of the MABE accelerator was determined. The radiation dose measured in TLD arrays mounted downstream of the stop is compared with the dose expected from a CYLTRAN Monte Carlo simulation of the electron/photon transport in the stop as a function of incident angles and energies. All measured radiation profiles are well fit if the electrons are assumed to be incident with a polar angle θ of 15° ± 2°. Comparing θ with that expected from the Adler-Miller model and with a MAGIC code simulation of beam behavior at the stop enables the mean transverse beam velocity to be estimated.

  5. Absolute Calibration of Image Plate for electrons at energy between 100 keV and 4 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Back, N L; Eder, D C

    2007-12-10

    The authors measured the absolute response of image plate (Fuji BAS SR2040) to electrons at energies between 100 keV and 4 MeV using an electron spectrometer. The electron source was produced by a short-pulse laser irradiating solid-density targets. This paper presents the calibration results for the image plate in photostimulated luminescence (PSL) per electron over this energy range. Results from the Monte Carlo radiation transport code MCNPX are also presented for three representative incident angles onto the image plates and the corresponding electron energy depositions at these angles. Together these provide a complete set of tools that allows extension of the absolute calibration to other spectrometer settings in this electron energy range.

  6. Measurement of electron angle at MABE beam stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanford, T.W.L.; Coleman, P.D.; Poukey, J.W.

    1984-01-01

    The mean angle of incidence at the beam stop of a 60 kA, 7 MV annular electron beam in the 20 kG guide field of the MABE accelerator is determined. Radiation measured in TLD arrays mounted downstream of the stop is compared with the radiation expected from a CYLTRAN Monte Carlo simulation of the electron/photon transport in the stop as a function of incident angles and energies. All measured radiation profiles are well fit if the electrons are assumed to be incident with a polar angle θ of 15° ± 2°. Comparing θ with that expected from the Adler-Miller model and a MAGIC code simulation of beam behavior at the stop enables the mean transverse beam velocity to be estimated.

  7. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. 
The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.

  8. Lung Dosimetry for Radioiodine Treatment Planning in the Case of Diffuse Lung Metastases

    PubMed Central

    Song, Hong; He, Bin; Prideaux, Andrew; Du, Yong; Frey, Eric; Kasecamp, Wayne; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George

    2010-01-01

    The lungs are the most frequent sites of distant metastasis in differentiated thyroid carcinoma. Radioiodine treatment planning for these patients is usually performed following the Benua–Leeper method, which constrains the administered activity to a 2.96 GBq (80 mCi) whole-body retention at 48 h after administration to prevent lung toxicity in the presence of iodine-avid lung metastases. This limit was derived from clinical experience, and a dosimetric analysis of lung and tumor absorbed dose would be useful to understand the implications of this limit on toxicity and tumor control. Because of highly nonuniform lung density and composition, as well as the nonuniform activity distribution when the lungs contain tumor nodules, Monte Carlo dosimetry is required to estimate tumor and normal lung absorbed dose. Reassessment of this toxicity limit is also appropriate in light of the contemporary use of recombinant thyrotropin (thyroid-stimulating hormone) (rTSH) to prepare patients for radioiodine therapy. In this work we demonstrated the use of MCNP, a Monte Carlo electron and photon transport code, in a 3-dimensional (3D) imaging-based absorbed dose calculation for tumor and normal lungs. Methods: A pediatric thyroid cancer patient with diffuse lung metastases was administered 37 MBq of 131I after preparation with rTSH. SPECT/CT scans were performed over the chest at 27, 74, and 147 h after tracer administration. The time-activity curve for 131I in the lungs was derived from whole-body planar imaging and compared with that obtained from quantitative SPECT methods. Reconstructed and coregistered SPECT/CT images were converted into 3D density and activity probability maps suitable for MCNP4b input. Absorbed dose maps were calculated using electron and photon transport in MCNP4b. Administered activity was estimated on the basis of a maximum tolerated dose (MTD) of 27.25 Gy to the normal lungs. 
Computational efficiency of the MCNP4b code was studied with a simple segmentation approach. In addition, the Benua–Leeper method was used to estimate the recommended administered activity. The standard dosing plan was modified to account for the weight of this pediatric patient: the 2.96-GBq (80 mCi) whole-body retention was scaled to 2.44 GBq (66 mCi) to give the same dose rate of 43.6 rad/h in the lungs at 48 h. Results: Using the MCNP4b code, both the spatial dose distribution and a dose-volume histogram were obtained for the lungs. An administered activity of 1.72 GBq (46.4 mCi) delivered the putative MTD of 27.25 Gy to the lungs with a tumor absorbed dose of 63.7 Gy. Directly applying the Benua–Leeper method, an administered activity of 3.89 GBq (105.0 mCi) was obtained, resulting in tumor and lung absorbed doses of 144.2 and 61.6 Gy, respectively, when the MCNP-based dosimetry was applied. The voxel-by-voxel calculation time of 4,642.3 h for photon transport was reduced to 16.8 h when the activity maps were segmented into 20 regions. Conclusion: MCNP4b-based, patient-specific 3D dosimetry is feasible and important in the dosimetry of thyroid cancer patients with iodine-avid lung metastases that exhibit prolonged retention in the lungs. PMID:17138741
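The activity prescription step is a linear scaling from the tracer-derived dose per unit administered activity; a minimal sketch of that arithmetic, using only the numbers quoted in this abstract:

```python
# From the abstract: 1.72 GBq delivered 27.25 Gy to the lungs, so the
# Monte Carlo dosimetry yields a lung dose per unit administered activity of
dose_per_GBq = 27.25 / 1.72        # Gy per GBq (~15.8)

MTD = 27.25                        # Gy, putative normal-lung tolerance
A_admin = MTD / dose_per_GBq       # GBq that delivers exactly the MTD

# Absorbed doses scale linearly with activity, so the tumor dose at this
# prescription follows from the quoted 63.7 Gy at 1.72 GBq:
tumor_dose = 63.7 * (A_admin / 1.72)
```

The same linear scaling explains the Benua–Leeper comparison in the Results: administering 3.89 GBq instead of 1.72 GBq multiplies both tumor and lung doses by the same activity ratio.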

  9. Charged particle transport in magnetic fields in EGSnrc.

    PubMed

    Malkov, V N; Rogers, D W O

    2016-07-01

    The purpose of this work is to implement charged particle transport in a magnetic field accurately and efficiently in EGSnrc and to validate the code for use in phantom and ion chamber simulations. The effect of the magnetic field on the particle motion and position is determined using one- and three-point numerical integrations of the Lorentz force on the charged particle and is added to the condensed history calculation performed by the EGSnrc PRESTA-II algorithm. The code is tested with a Fano test adapted for the presence of magnetic fields. The code is compatible with all EGSnrc based applications, including egs++. Ion chamber calculations are compared to experimental measurements, and the effect of the code on efficiency and timing is determined. Agreement with the Fano test's theoretical value is obtained at the 0.1% level for large step sizes and in magnetic fields as strong as 5 T. The NE2571 dose calculations agree with experiment within 0.5% up to 1 T, beyond which deviations up to 1.2% are observed. Uniform air gaps of 0.5 and 1 mm and a misalignment of the incoming photon beam with the magnetic field are found to produce variations in the normalized dose on the order of 1%. These findings necessitate a clear definition of all experimental conditions to allow for accurate Monte Carlo simulations. Ion chamber simulation times are increased by only 38%, and a 10 × 10 × 6 cm³ water phantom with (3 mm)³ voxels experiences a 48% increase in simulation time compared to the default EGSnrc with no magnetic field. The incorporation of magnetic field effects in EGSnrc provides the capability to calculate high-accuracy ion chamber and phantom doses for use in MRI-radiation systems. Further, the effect of apparently insignificant experimental details is found to be accentuated by the presence of the magnetic field.
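The field step added to the condensed-history transport must deflect the electron without changing its speed, since a static magnetic field does no work. A generic sketch of such a step using the Boris rotation (a standard scheme, shown here for illustration; it is not the paper's one- and three-point Lorentz integrator):

```python
import numpy as np

def boris_rotate(v, B, q_over_m, dt):
    """Rotate velocity v in a magnetic field B over time step dt using the
    Boris scheme. A pure magnetic field changes direction only, so |v| is
    conserved exactly -- the property a condensed-history field step must
    preserve to pass a Fano-type consistency test."""
    t = 0.5 * q_over_m * dt * B            # half-step rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)           # intermediate rotation
    return v + np.cross(v_prime, s)        # completed rotation

v0 = np.array([1.0, 0.0, 0.0])
v1 = boris_rotate(v0, B=np.array([0.0, 0.0, 1.5]), q_over_m=-1.0, dt=0.1)
```

Because the update is a pure rotation, the speed (and hence kinetic energy) is preserved to machine precision regardless of step size, which is why schemes of this family are favored for charged-particle stepping in magnetic fields.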

  10. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.
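The erasure-correction property that the quantum construction inherits from classical Reed-Solomon codes can be sketched directly: a length-n codeword of a degree-(k−1) polynomial survives the loss of any n−k symbols. A minimal sketch over the prime field GF(257), with illustrative parameters k = 3 and n = 6 (not the paper's parameters):

```python
# Sketch of classical Reed-Solomon erasure correction over GF(257).
P = 257  # prime modulus, so every nonzero element is invertible

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial through `points` at x (mod P)."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def rs_encode(msg, n):
    # Systematic encoding: message symbols are the polynomial's values at x = 1..k.
    pts = list(enumerate(msg, start=1))
    return [lagrange_eval(pts, x) for x in range(1, n + 1)]

def rs_decode(survivors, k):
    # Any k surviving (x, y) pairs pin down the polynomial; re-evaluate at x = 1..k.
    return [lagrange_eval(survivors[:k], x) for x in range(1, k + 1)]

msg = [42, 7, 200]
codeword = rs_encode(msg, 6)
survivors = list(zip(range(1, 7), codeword))[3:]  # erase the first 3 symbols
recovered = rs_decode(survivors, 3)
```

Recovery from any k of n symbols is exactly the erasure-channel capacity argument the abstract builds on; the quantum version replaces field symbols with multi-level (qudit) states.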

  11. Resonant optical scattering in nanoparticle-doped polymer photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumberg, J. J.; Pursiainen, O. L.; Spahn, P.

    2009-11-15

    A broadband hyperspectral technique is used to measure the coherent optical backscatter across a wide spectral bandwidth, showing the resonant suppression of the photon transport mean free path around the photonic bandgap of a shear-assembled polymer photonic crystal. By doping with carbon nanoscale scatterers that reside at specific points within the photonic crystal lattice, the ratio between photon mean free path and optical penetration is tuned from 10 to 1, enhancing forward scatter at the expense of back-scatter. The back-scattering strength of different polarisations is not explained by any current theory.

  12. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  13. Application of Monte Carlo codes to radiation therapy with low-LET radiation

    NASA Astrophysics Data System (ADS)

    Marcié, S.

    1998-04-01

    In radiation therapy, a variety of low-LET radiations are used: photons from 60Co, photons and electrons from 4 to 25 MV generated in linear accelerators, and photons from 137Cs, 192Ir, and 125I. To determine as accurately as possible the dose delivered to tissue by these radiations, software tools are used along with measurement instruments. With the growth in computing power and capacity, the application of Monte Carlo codes has extended to radiation therapy, making it possible to better characterize the effects of the radiations, determine their spectra, refine the parameters used in dosimetric calculations, verify algorithms, study the measurement systems and phantoms used, calculate the dose at points inaccessible to measurement, and consider the use of new radionuclides.

  14. Advanced optical simulation of scintillation detectors in GATE V8.0: first implementation of a reflectance model based on measured data

    NASA Astrophysics Data System (ADS)

    Stockhoff, Mariele; Jan, Sebastien; Dubois, Albertine; Cherry, Simon R.; Roncali, Emilie

    2017-06-01

    Typical PET detectors are composed of a scintillator coupled to a photodetector that detects the scintillation photons produced when high-energy gamma photons interact with the crystal. A critical performance factor is the collection efficiency of these scintillation photons, which can be optimized through simulation. Accurate modelling of photon interactions with crystal surfaces is essential in optical simulations, but the existing UNIFIED model in GATE is often inaccurate, especially for rough surfaces. Previously, a new approach for modelling surface reflections based on measured surfaces was validated using a custom Monte Carlo code. In this work, the LUT Davis model is implemented and validated in GATE and GEANT4, and is made accessible to all users in the nuclear imaging research community. Look-up tables (LUTs) for various crystal surfaces are calculated from surfaces measured by atomic force microscopy. The LUTs include photon reflection probabilities and directions depending on the incidence angle. We provide LUTs for rough and polished surfaces with different reflectors and coupling media. Validation parameters include light output measured at different depths of interaction in the crystal and photon track lengths, as both parameters are strongly dependent on reflector characteristics and distinguish between models. Results from the GATE/GEANT4 beta version are compared to those from our custom code and experimental data, as well as to the UNIFIED model. GATE simulations with the LUT Davis model show average variations in light output of <2% from the custom code and excellent agreement for track lengths with R² > 0.99. Experimental data agree within 9% for relative light output. The new model also simplifies surface definition, as no complex input parameters are needed. The LUT Davis model makes optical simulations for nuclear imaging detectors much more precise, especially for studies with rough crystal surfaces. It will be available in GATE V8.0.
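The core idea of a reflectance look-up table can be sketched as follows: tabulated reflection probabilities, indexed by incidence angle, replace an analytic surface model. The angle grid, table values, and decision logic below are invented placeholders, not data from the measured Davis LUTs (which also tabulate outgoing photon directions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LUT: reflection probability tabulated on a grid of
# incidence angles (degrees). Values are illustrative only.
angles_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
p_reflect = np.array([0.05, 0.07, 0.12, 0.30, 0.65, 0.90, 1.00])

def surface_interaction(theta_deg):
    """Decide 'reflect' or 'transmit' by linearly interpolating the LUT."""
    p = float(np.interp(theta_deg, angles_deg, p_reflect))
    return "reflect" if rng.random() < p else "transmit"
```

In an optical Monte Carlo loop, this lookup replaces the UNIFIED model's parametrized micro-facet sampling at each crystal-surface hit.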

  15. Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview

    NASA Astrophysics Data System (ADS)

    Markman, A.; Javidi, B.

    2016-06-01

    Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. A QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning, owing to the error correction built into the QR code design. Integral imaging is an imaging technique that generates a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of the scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon counting on the EI prior to compression. Photon counting is a non-linear transformation of the data that creates redundant information, thus improving image compression. The compressed data are encrypted using the DRPE. Once the information is stored in the QR codes, it is scanned using a smartphone device. The scanned information is decompressed and decrypted, and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
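The run-length-encoding stage mentioned above is easy to sketch; the Huffman-coding and DRPE steps are omitted here, and the sample pixel row is invented.

```python
from itertools import groupby

def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values as (value, count) pairs."""
    return [(v, len(list(g))) for v, g in groupby(pixels)]

def rle_decode(runs):
    """Invert rle_encode, expanding each (value, count) pair."""
    return [v for v, n in runs for _ in range(n)]

# Photon-counted elemental images are mostly zeros, so runs compress well.
ei_row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 2]
runs = rle_encode(ei_row)  # [(0, 3), (1, 2), (0, 4), (2, 1)]
```

The long zero runs produced by photon counting are exactly why the abstract pairs it with run-length compression before encryption.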

  16. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin

    2015-07-01

    The specific absorbed fractions (SAFs) for self- and cross-irradiation are effective tools for internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs for photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created by the FDS Team from the color photographic images of the Chinese Visible Human (CVH) data set. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasian phantoms. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN phantom follow similar trends but are larger than those from the other two models. The differences are due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203), and the National Special Program for ITER (2014GB112001)
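For context, the SAF itself is a simple ratio: the fraction of the emitted energy absorbed in a target organ, divided by the target's mass. The sketch below uses invented energies and an invented organ mass, not Rad-HUMAN tallies.

```python
def specific_absorbed_fraction(e_absorbed_mev, e_emitted_mev, target_mass_kg):
    """SAF in kg^-1: (absorbed fraction of emitted energy) / (target organ mass).

    In a Monte Carlo calculation, e_absorbed would be the energy-deposition
    tally in the target organ and e_emitted the total source energy.
    All values here are illustrative placeholders.
    """
    return (e_absorbed_mev / e_emitted_mev) / target_mass_kg

# Hypothetical example: 2% of emitted energy absorbed in a 0.31 kg organ.
saf = specific_absorbed_fraction(0.02, 1.0, 0.31)
```

Because the SAF normalizes by organ mass, the racial and anatomical differences in organ mass noted in the abstract feed directly into the computed values.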

  17. Poem: A Fast Monte Carlo Code for the Calculation of X-Ray Transition Zone Dose and Current

    DTIC Science & Technology

    1975-01-15

    stored on the photon interaction data tape. Following the photoelectric ionization the atom will relax, emitting either a fluorescent photon or an Auger ... shell fluorescence yield ωL have been obtained from the Storm and Israel and Bambynek et al. compilations, with preference given to the ... Bambynek compilation, and stored on the photon interaction data tape. The mean M fluorescence yield ωM is approximated by zero. The total electron source

  18. Color-Coded Batteries - Electro-Photonic Inverse Opal Materials for Enhanced Electrochemical Energy Storage and Optically Encoded Diagnostics.

    PubMed

    O'Dwyer, Colm

    2016-07-01

    For consumer electronic devices, long-life, stable Li-ion batteries with reasonably fast charging and good capacity retention are a necessity. For exciting and important advances in the materials that drive innovations in electrochemical energy storage (EES), modular thin-film solar cells, and the wearable, flexible technology of the future, real-time analysis and indication of battery performance and health is crucial. Here, developments in color-coded assessment of battery material performance and diagnostics are described, and a vision is outlined for using electro-photonic inverse opal materials and all-optical probes to assess, characterize, and monitor these processes non-destructively in real time. By structuring any cathode or anode material in the form of a photonic crystal or as a 3D macroporous inverse opal, color-coded "chameleon" battery-strip electrodes may provide an amenable way to distinguish the type of process, the voltage, material and chemical phase changes, remaining capacity, cycle health, and state of charge or discharge of either existing or new materials in Li-ion or emerging alternative battery types, simply by monitoring the color change. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. CRUNCH_PARALLEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaker, Dana E.; Steefel, Carl I.

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code; CRUNCH version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database containing thermodynamic and kinetic data is used. The code includes advective, dispersive, and diffusive transport.

  20. Orestes Kinetics Model for the Electra KrF Laser

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R. H.; Myers, M. C.; Sethian, J. D.; Petrov, G.; Wolford, M.; Hegeler, F.

    2003-10-01

    Orestes is a first-principles simulation code for the electron deposition, plasma chemistry, laser transport, and amplified spontaneous emission (ASE) in an e-beam-pumped KrF laser. Orestes has been benchmarked against results from Nike at NRL and the Keio laser facility. The modeling tasks are to support ongoing oscillator experiments on the Electra laser (500 J), to predict the performance of Electra as an amplifier, and to develop scaling relations for larger systems such as those envisioned for an inertial fusion energy power plant. In Orestes the energy deposition of the primary beam electrons is assumed to be spatially uniform, but the excitation and ionization of the Ar/Kr/F2 target gas by the secondary electrons are determined from the energy distribution function as calculated by a Boltzmann code. The subsequent plasma kinetics of 23 species subject to over 100 reactions is followed with 1-D spatial resolution along the lasing axis. In addition, the vibrational relaxation among excited electronic states of the KrF molecule is included in the kinetics, since lasing at 248 nm can occur from several vibrational lines of the B state. Transport of the lasing photons is solved by the method of characteristics. The time-dependent ASE is calculated in 3-D using a ``local look-back'' scheme with discrete ordinates and includes specular reflection off the side walls and rear mirror. Gain narrowing is treated by multi-frequency transport of the ASE. Calculations of the gain, saturation intensity, extraction efficiency, and laser output from the Orestes model will be presented and compared with available data from Electra operated as an oscillator. Potential implications of the difference in optimal F2 concentration will be discussed, along with the effects of window transmissivity at 248 nm.

  1. The Role of Generation Volume and Photon Recycling in Transport Imaging of Bulk Materials

    DTIC Science & Technology

    2011-12-01

    cobalt. The value and uniformity of the mobility-lifetime product determine the quality of spectral resolution that can be obtained. Figure 1 ... on dopants and defects. Figure 24: schematic of the photon recycling effect. To measure the photon recycling in a material, the full emission spectrum

  2. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sample an entire photon path each time. The Metropolis-Hastings algorithm is employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a GPU package, gMMC, implementing this new scheme. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
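The Metropolis-Hastings accept/reject rule at the heart of the path-by-path scheme can be illustrated on a toy problem. In gMMC the state is an entire photon path; here it is a single scalar with a standard-normal target, purely to show the acceptance probability min(1, p(x′)/p(x)). All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_target, x0, n_steps, step=1.0):
    """Metropolis-Hastings with a symmetric Gaussian proposal.

    A proposed state x' is accepted with probability
    min(1, p(x') / p(x)); with a symmetric proposal this preserves the
    correct relative probabilities among states, which is the property
    gMMC needs among photon paths.
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + step * rng.normal()
        if np.log(rng.random()) < log_target(x_new) - log_target(x):
            x = x_new            # accept the proposed state
        samples.append(x)        # on reject, the old state is counted again
    return np.array(samples)

# Target: standard normal, log p(x) = -x^2/2 up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

Counting the old state again on rejection is essential: dropping rejected steps would bias the sampled distribution.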

  3. Current correlations for the transport of interacting electrons through parallel quantum dots in a photon cavity

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Vidar; Abdullah, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei

    2018-06-01

    We calculate the current correlations for steady-state electron transport through multi-level parallel quantum dots embedded in a short quantum wire placed in a non-perfect photon cavity. We account for the electron-electron Coulomb interaction, and the para- and diamagnetic electron-photon interactions, with a stepwise scheme of configuration interactions and truncation of the many-body Fock spaces. In the spectral density of the temporal current-current correlations we identify all the transitions, radiative and non-radiative, that are active in the system in order to maintain the steady state. We observe strong signs of two types of Rabi oscillations.

  4. A Monte Carlo study on {sup 223}Ra imaging for unsealed radionuclide therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Miwa, Kenta; Sasaki, Masayuki

    Purpose: Radium-223 (223Ra), an α-emitting radionuclide, is used in unsealed radionuclide therapy for metastatic bone tumors. The demand for quantitative 223Ra imaging is growing in order to optimize dosimetry. The authors simulated 223Ra imaging using an in-house Monte Carlo simulation code and investigated the feasibility and utility of 223Ra imaging. Methods: The Monte Carlo code comprises two modules, HEXAGON and NAI. The HEXAGON code simulates the photon and electron interactions in the tissues and collimator, and the NAI code simulates the response of the NaI detector system. A 3D numeric phantom created from computed tomography images of a chest phantom was installed in the HEXAGON code. 223Ra accumulated in a part of the spine, and three x rays and 19 γ rays between 80 and 450 keV were selected as the emitted photons. To evaluate the quality of 223Ra imaging, the authors also simulated technetium-99m (99mTc) imaging under the same conditions and compared the results. Results: The sensitivities of the three photopeaks were 147 counts per unit of source activity (cps/MBq; photopeak: 84 keV, full width of energy window: 20%), 166 cps/MBq (154 keV, 15%), and 158 cps/MBq (270 keV, 10%) for a low-energy general-purpose (LEGP) collimator; those for the medium-energy general-purpose (MEGP) collimator were 33, 13, and 8.0 cps/MBq, respectively. In the case of 99mTc, the sensitivity was 55 cps/MBq (141 keV, 20%) for LEGP and 52 cps/MBq for MEGP. The fractions of unscattered photons among the total photons, reflecting the image quality, were 0.09 (84 keV), 0.03 (154 keV), and 0.02 (270 keV) for the LEGP collimator and 0.41, 0.25, and 0.50 for the MEGP collimator, respectively. Conversely, this fraction was approximately 0.65 for the simulated 99mTc imaging. The sensitivity with the LEGP collimator appeared very high. However, almost all of the counts were due to photons that penetrated or were scattered in the collimator; therefore, the proportions of unscattered photons were small. Conclusions: This simulation study revealed that the most promising scheme for 223Ra imaging is an 84-keV window with an MEGP collimator. The sensitivity of the photopeaks above 100 keV is too low for 223Ra imaging. A comparison of the fractions of unscattered photons reveals that the sensitivity and image quality are approximately two-thirds of those for 99mTc imaging.

  5. Monte Carlo Interpretation of the Photon Heating Measurements in the Integral AMMON/REF Experiment in the EOLE Facility

    NASA Astrophysics Data System (ADS)

    Vaglio-Gaudard, C.; Stoll, K.; Ravaux, S.; Lemaire, M.; Colombier, A. C.; Hudelot, J. P.; Bernard, D.; Amharrak, H.; Di Salvo, J.; Gruel, A.

    2014-02-01

    An experiment named AMMON is dedicated to the analysis of the neutron and photon physics of the Jules Horowitz Reactor (JHR). AMMON, performed in the EOLE zero-power experimental reactor at CEA Cadarache, was completed in April 2013. Photon heating measurements were performed with both thermoluminescent dosimeters (TLD-400s) and optically stimulated luminescence dosimeters (OSLDs) in three AMMON configurations. The objective is to provide data for the experimental validation of the JHR photon calculation tool. The first analysis of the photon heating measurements of the reference configuration (AMMON/REF) is presented in this paper. The reference configuration consists of an experimental zone of seven JHR assemblies, with curved U3Si2-Al fuel plates enriched to 27% in 235U, surrounded by a driver zone of 623 standard PWR UOx fuel pins. The photon heating was measured in the aluminum follower of the central and peripheral assemblies, and in aluminum fillers in the rack between assemblies. The measurement analysis is based on Monte Carlo TRIPOLI-4® version 8.1 calculations modeling the exact three-dimensional geometry of the core. The JEFF nuclear data library is used for the calculation of the neutron transport and the photon emission in the AMMON/REF experiment. The photon transport is based on the EPDL97 photo-atomic library. The prompt and delayed doses deposited in the dosimeters were estimated separately. Depending on whether the prompt or the delayed dose is calculated, the transport of four particles (neutrons, photons, electrons, and positrons) or three particles (photons, electrons, and positrons) is simulated for the AMMON/REF analysis. The TRIPOLI-4.8.1® calculations make it possible to model the electromagnetic cascade shower with both electrons and positrons. The delayed dose represents about 25% of the total photon energy deposition in the dosimeters. The comparison between calculation and experiment reveals a slight systematic underestimation of the calculated global photon energy deposition: (C − E)/E = −8% ± 4.5% (1σ). Special care was taken in determining the uncertainty associated with the (C − E)/E values. The slight underestimation can probably be explained by an underestimation of the photon emission data in the JEFF library.

  6. A Monte Carlo Ray Tracing Model to Improve Simulations of Solar-Induced Chlorophyll Fluorescence Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Halubok, M.; Gu, L.; Yang, Z. L.

    2017-12-01

    A model of light transport in a three-dimensional vegetation canopy is being designed and evaluated. The model employs a Monte Carlo ray tracing technique, which offers a simple yet rigorous approach to quantifying photon transport in a plant canopy. The method simulates the chain of scattering and absorption events incurred by a photon on its path from the light source. A weighting mechanism avoids `all-or-nothing' interactions between a photon packet and a canopy element: at each interaction the photon packet is split into three parts, namely reflected, transmitted, and absorbed, instead of assuming complete absorption, reflection, or transmission. Canopy scenes in the model are represented by a number of polygons with a specified set of reflectances and transmittances. The performance of the model is being evaluated through comparison against established plant canopy reflectance models, such as the 3D Radiosity-Graphics combined model, which calculates the bidirectional reflectance distribution function of a 3D canopy scene. This photon transport model is to be coupled to a leaf-level solar-induced chlorophyll fluorescence (SIF) model, with the aim of further improving the accuracy of the modeled SIF, which in turn has the potential to improve our predictive capability of terrestrial carbon uptake.
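The weighting mechanism described above, splitting a packet rather than making an all-or-nothing random choice, can be sketched for a single interaction; the reflectance and transmittance values below are invented, not calibrated leaf properties.

```python
def split_weight(w, rho, tau):
    """Split a photon packet's weight at a canopy element.

    Instead of randomly choosing one outcome, the packet of weight w is
    deterministically split into reflected (w * rho), transmitted
    (w * tau), and absorbed (w * (1 - rho - tau)) parts, where rho and
    tau are the element's reflectance and transmittance (rho + tau <= 1).
    The reflected and transmitted parts continue as new, lighter packets.
    """
    reflected = w * rho
    transmitted = w * tau
    absorbed = w * (1.0 - rho - tau)
    return reflected, transmitted, absorbed

# Hypothetical leaf-like optical properties for one interaction.
r, t, a = split_weight(1.0, 0.45, 0.40)
```

Energy is conserved exactly at every interaction, which is why weight splitting reduces variance compared with all-or-nothing sampling.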

  7. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  8. The difference of scoring dose to water or tissues in Monte Carlo dose calculations for low energy brachytherapy photon sources.

    PubMed

    Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank

    2011-03-01

    The goal of this work is to compare D(m,m) (radiation transported in medium; dose scored in medium) and D(w,m) (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: 125I, 103Pd, and 131Cs seeds, as well as an EBS operating at 50 kV. Ratios of D(w,m) over D(m,m) are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using 103Pd) and prostate (using 125I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for D(w,m) and D(m,m). (1) Differences (D(w,m)/D(m,m)-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D(w,m)/D(m,m) is similar for all sources within 4% and does not vary by more than 2% with distance, owing to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors by up to 38%. (2) The ratio of D90(w,m) over D90(m,m) for clinical implants matches D(w,m)/D(m,m) at 1 cm from the single point sources. Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on the choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
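The large cavity theory conversion underlying D(w,m)/D(m,m) can be written as a ratio of mass energy-absorption coefficients, averaged over the source photon spectrum. The sketch below uses placeholder coefficient values, not tabulated data.

```python
def dose_water_in_medium(d_m_m, mu_en_rho_water, mu_en_rho_medium):
    """Large cavity theory conversion:

        D(w,m) = D(m,m) * (mu_en/rho)_water / (mu_en/rho)_medium,

    where the mass energy-absorption coefficients are averaged over the
    source photon spectrum (or approximated at its mean energy, as the
    abstract discusses). Values below are illustrative placeholders.
    """
    return d_m_m * (mu_en_rho_water / mu_en_rho_medium)

# Hypothetical: medium absorbs half as strongly as water per unit mass,
# so the water-scored dose is twice the medium-scored dose.
d_w_m = dose_water_in_medium(2.0, 0.0406, 0.0203)
```

At low photon energies the photoelectric effect makes these coefficients very sensitive to tissue composition, which is the origin of the large (up to 70%) differences reported.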

  9. Large-area metallic photonic lattices for military applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luk, Ting Shan

    2007-11-01

    In this project we developed photonic crystal modeling capability and fabrication technology that is scalable to large areas. An intelligent optimization code was developed to find the optimal structure for a desired spectral response. In terms of fabrication, an exhaustive survey of techniques that would meet the large-area requirement narrowed the candidates to Deep X-ray Lithography (DXRL) and nano-imprint. Using DXRL, we fabricated a gold logpile photonic crystal in the <100> plane. With the nano-imprint technique, we fabricated a cubic array of gold squares. These two examples also represent two classes of metallic photonic crystal topologies: the connected network and the cermet arrangement.

  10. Amorphous photonic crystals with only short-range order.

    PubMed

    Shi, Lei; Zhang, Yafeng; Dong, Biqin; Zhan, Tianrong; Liu, Xiaohan; Zi, Jian

    2013-10-04

    Distinct from conventional photonic crystals with both short- and long-range order, amorphous photonic crystals that possess only short-range order show interesting optical responses owing to their unique structural features. Amorphous photonic crystals exhibit unique light scattering and transport, which lead to a variety of interesting phenomena such as isotropic photonic bandgaps or pseudogaps, noniridescent structural colors, and light localization. Recent experimental and theoretical advances in the study of amorphous photonic crystals are summarized, focusing on their unique optical properties, artificial fabrication, bioinspiration, and potential applications. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Benchmark test of neutron transport calculations: indium, nickel, gold, europium, and cobalt activation with and without energy moderated fission neutrons by iron simulating the Hiroshima atomic bomb casing.

    PubMed

    Iwatani, K; Hoshi, M; Shizuma, K; Hiraoka, M; Hayakawa, N; Oka, T; Hasai, H

    1994-10-01

    A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a bare and an energy-moderated 252Cf fission neutron source, the latter obtained by transmission through 10-cm-thick iron. The iron plate was used to simulate the effect of the Hiroshima atomic bomb casing. The test includes the activation of indium and nickel by fast neutrons, and of gold, europium, and cobalt by thermal and epithermal neutrons; the activation detectors were inserted in the moderators. The latter two activations also serve to validate the 152Eu and 60Co activity data obtained from atomic bomb-exposed specimens collected at Hiroshima and Nagasaki, Japan. The neutron moderators used were Lucite and Nylon 6, and the total thickness of each moderator was 60 cm or 65 cm. Measured activity data (reaction yields) of the neutron-irradiated detectors in these moderators decreased to about 1/1,000th or 1/10,000th, which corresponds to about 1,500 m ground distance from the hypocenter in Hiroshima. For all of the indium, nickel, and gold activity data, the measured and calculated values agreed within 25%; the corresponding values for europium and cobalt were within 40%. From this study, the MCNP code was found to be sufficiently accurate for bare and energy-moderated 252Cf neutron activation calculations of these elements using moderators containing hydrogen, carbon, nitrogen, and oxygen.
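Activation measurements of this kind rest on the standard saturation law for induced activity, A(t) = N σ φ (1 − e^(−λt)). The numbers below are illustrative placeholders, not the 252Cf benchmark values.

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux_cm2_s, t_irr_s, half_life_s):
    """Induced activity A(t) = N * sigma * phi * (1 - exp(-lambda * t)).

    N is the number of target atoms, sigma the activation cross section,
    phi the neutron flux, and lambda = ln(2) / T_half the decay constant.
    At long irradiation times the activity saturates at N * sigma * phi.
    """
    lam = math.log(2) / half_life_s
    return n_atoms * sigma_cm2 * flux_cm2_s * (1.0 - math.exp(-lam * t_irr_s))

# Hypothetical detector foil irradiated far beyond saturation.
a_sat = induced_activity(1e20, 1e-24, 1e8, 1e9, 10.0)
```

Inverting this relation is how a measured 152Eu or 60Co activity is converted back to the neutron fluence that the MCNP calculations are benchmarked against.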

  12. On-chip generation of heralded photon-number states

    NASA Astrophysics Data System (ADS)

    Vergyris, Panagiotis; Meany, Thomas; Lunghi, Tommaso; Sauder, Gregory; Downes, James; Steel, M. J.; Withford, Michael J.; Alibart, Olivier; Tanzilli, Sébastien

    2016-10-01

    Beyond the use of genuine monolithic integrated optical platforms, we report here a hybrid strategy enabling on-chip generation of configurable heralded two-photon states. More specifically, we combine two different fabrication techniques, i.e., non-linear waveguides on lithium niobate for efficient photon-pair generation and femtosecond-laser-direct-written waveguides on glass for photon manipulation. Through real-time device manipulation capabilities, a variety of path-coded heralded two-photon states can be produced, ranging from product to entangled states. Those states are engineered with high levels of purity, assessed by fidelities of 99.5 ± 8% and 95.0 ± 8%, respectively, obtained via quantum interferometric measurements. Our strategy therefore stands as a milestone for further exploiting entanglement-based protocols, relying on engineered quantum states, and enabled by scalable and compatible photonic circuits.
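
As a quick illustration of the fidelity figures quoted above, the overlap between a measured two-photon state and a pure target state can be computed directly. The sketch below assumes a simple depolarizing noise model and made-up basis labels; none of the values come from the paper.

```python
import numpy as np

# Basis: two-photon path states |00>, |01>, |10>, |11> (hypothetical labels).
# Ideal path-entangled target state (|00> + |11>)/sqrt(2).
target = np.array([1, 0, 0, 1]) / np.sqrt(2)

def fidelity(rho, psi):
    """Fidelity of a density matrix rho against a pure target state |psi>."""
    return float(np.real(psi.conj() @ rho @ psi))

# A slightly depolarized version of the target state (assumed noise model).
p = 0.05
rho_ideal = np.outer(target, target.conj())
rho_noisy = (1 - p) * rho_ideal + p * np.eye(4) / 4

print(fidelity(rho_noisy, target))  # 1 - p + p/4 = 0.9625
```

For a pure measured state the same expression reduces to |⟨ψ|φ⟩|²; experiments typically reconstruct ρ from interferometric or tomographic data first.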

  13. On-chip generation of heralded photon-number states

    PubMed Central

    Vergyris, Panagiotis; Meany, Thomas; Lunghi, Tommaso; Sauder, Gregory; Downes, James; Steel, M. J.; Withford, Michael J.; Alibart, Olivier; Tanzilli, Sébastien

    2016-01-01

    Beyond the use of genuine monolithic integrated optical platforms, we report here a hybrid strategy enabling on-chip generation of configurable heralded two-photon states. More specifically, we combine two different fabrication techniques, i.e., non-linear waveguides on lithium niobate for efficient photon-pair generation and femtosecond-laser-direct-written waveguides on glass for photon manipulation. Through real-time device manipulation capabilities, a variety of path-coded heralded two-photon states can be produced, ranging from product to entangled states. Those states are engineered with high levels of purity, assessed by fidelities of 99.5 ± 8% and 95.0 ± 8%, respectively, obtained via quantum interferometric measurements. Our strategy therefore stands as a milestone for further exploiting entanglement-based protocols, relying on engineered quantum states, and enabled by scalable and compatible photonic circuits. PMID:27775062

  14. Integrated system for production of neutronics and photonics calculational constants. Program SIGMA1 (Version 77-1): Doppler broaden evaluated cross sections in the Evaluated Nuclear Data File/Version B (ENDF/B) format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.

    1977-01-12

    A code, SIGMA1, has been designed to Doppler broaden evaluated cross sections in the ENDF/B format. The code can only be applied to tabulated data that vary linearly in energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code.
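
SIGMA1 applies the exact free-gas kernel to lin-lin tabulated data; the sketch below only illustrates the qualitative effect using the common Gaussian approximation with width sqrt(4·E·kT/A). The resonance shape and all numbers are illustrative assumptions, not ENDF/B data.

```python
import numpy as np

# Simplified Doppler broadening of a lin-lin tabulated cross section.
# This is NOT SIGMA1's exact kernel: it uses the Gaussian approximation
# with width sqrt(4*E*kT/A) (energies in eV, A in neutron masses).

def broadened_at(E_grid, sigma, E, kT, A):
    """Approximate Doppler-broadened cross section at energy E."""
    width = np.sqrt(4.0 * E * kT / A)        # Gaussian width in eV
    kernel = np.exp(-((E_grid - E) / width) ** 2)
    kernel /= kernel.sum()                   # normalize on the discrete grid
    return float((kernel * sigma).sum())

E_grid = np.linspace(1.0, 20.0, 2001)
# A narrow triangular "resonance" on a flat background (lin-lin tabulation).
sigma = 1.0 + 50.0 * np.maximum(0.0, 1.0 - np.abs(E_grid - 10.0) / 0.2)

peak_cold = float(sigma.max())
peak_hot = broadened_at(E_grid, sigma, 10.0, kT=0.0253, A=238.0)
print(peak_cold, peak_hot)   # broadening lowers and widens the resonance peak
```

The lin-lin restriction mentioned in the abstract matters because the exact broadening integrals have closed forms only for data that interpolate linearly between tabulated points.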

  15. Secondary gamma-ray production in a coded aperture mask

    NASA Technical Reports Server (NTRS)

    Owens, A.; Frye, G. M., Jr.; Hall, C. J.; Jenkins, T. L.; Pendleton, G. N.; Carter, J. N.; Ramsden, D.; Agrinier, B.; Bonfand, E.; Gouiffes, C.

    1985-01-01

    The application of the coded aperture mask to high energy gamma-ray astronomy will provide the capability of locating a cosmic gamma-ray point source with a precision of a few arc-minutes above 20 MeV. Recent tests using a mask in conjunction with drift chamber detectors have shown that the expected point spread function is achieved over an acceptance cone of 25 deg. A telescope employing this technique differs from a conventional telescope only in that the presence of the mask modifies the radiation field in the vicinity of the detection plane. In addition to reducing the primary photon flux incident on the detector by absorption in the mask elements, the mask will also be a secondary radiator of gamma-rays. The various background components in a CAMTRAC (Coded Aperture Mask Track Chamber) telescope are considered. Monte-Carlo calculations are compared with recent measurements obtained using a prototype instrument in a tagged photon beam line.

  16. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.

    2010-01-01

    The presentation outline includes motivation, radiation transport codes being considered, space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. Topics covered include the main physics in the radiation transport codes HZETRN, UPROP, FLUKA, and GEANT4, with slab-geometry results for solar particle event (SPE) and galactic cosmic ray (GCR) environments.

  17. Implementation of an anomalous radial transport model for continuum kinetic edge codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2007-11-01

    Radial plasma transport in magnetic fusion devices is often dominated by plasma turbulence compared to neoclassical collisional transport. Continuum kinetic edge codes [such as the (2d,2v) transport version of TEMPEST and also EGK] compute the collisional transport directly, but there is a need to model the anomalous transport from turbulence for long-time transport simulations. Such a model is presented and results are shown for its implementation in the TEMPEST gyrokinetic edge code. The model includes velocity-dependent convection and diffusion coefficients expressed as Hermite polynomials in velocity. The specification of the Hermite coefficients can be set, e.g., by specifying the ratio of particle and energy transport as in fluid transport codes. The anomalous transport terms preserve the property of no particle flux into unphysical regions of velocity space. TEMPEST simulations are presented showing the separate control of particle and energy anomalous transport, and comparisons are made with neoclassical transport also included.
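
A velocity-dependent coefficient expanded in Hermite polynomials, as described above, can be evaluated directly with NumPy's probabilists' Hermite series. The coefficient names and values below are illustrative placeholders, not TEMPEST's actual model inputs.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Sketch of a velocity-dependent anomalous diffusion coefficient expanded in
# probabilists' Hermite polynomials He_n(v). Values are illustrative only.

def D_anomalous(v, coeffs):
    """D(v) = sum_n c_n He_n(v), with v the normalized velocity."""
    return hermeval(v, coeffs)

v = np.linspace(-3, 3, 7)
coeffs = [1.0, 0.0, 0.2]     # c0 + c2*He_2(v); note He_2(v) = v**2 - 1
print(D_anomalous(v, coeffs))
```

Setting different coefficient sets for the convection and diffusion terms is what allows particle and energy fluxes to be controlled separately, as the abstract notes.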

  18. Investigation of Natural and Man-Made Radiation Effects on Crews on Long Duration Space Missions

    NASA Technical Reports Server (NTRS)

    Bolch, Wesley E.; Parlos, Alexander

    1996-01-01

    Over the past several years, NASA has studied a variety of mission scenarios designed to establish a permanent human presence on the surface of Mars. Nuclear electric propulsion (NEP) is one of the possible elements in this program. During the initial stages of vehicle design work, careful consideration must be given not only to the shielding requirements of natural space radiation, but to the shielding and configuration requirements of the on-board reactors. In this work, the radiation transport code MCNP has been used to make initial estimates of crew exposures to reactor radiation fields for a specific manned NEP vehicle design. In this design, three 25 MW(sub th), scaled SP-100-class reactors are shielded by three identical shields. Each shield has layers of beryllium, tungsten, and lithium hydride between the reactor and the crew compartment. Separate calculations are made of both the exiting neutron and gamma fluxes from the reactors during beginning-of-life, full-power operation. These data are then used as the source terms for particle transport in MCNP. The total gamma and neutron fluxes exiting the reactor shields are recorded and separate transport calculations are then performed for a 10 g/sq cm crew compartment aluminum thickness. Estimates of crew exposures have been assessed for various thicknesses of the shield tungsten and lithium hydride layers. A minimum tungsten thickness of 20 cm is required to shield the reactor photons below the 0.05 Sv/y man-made radiation limit. In addition to a 20-cm thick tungsten layer, a 40-cm thick lithium hydride layer is required to shield the reactor neutrons below the annual limit. If the tungsten layer is 30-cm thick, the lithium hydride layer should be at least 30-cm thick. These estimates do not take into account the photons generated by neutron interactions inside the shield because the MCNP neutron cross sections did not allow reliable estimates of photon production in these materials. 
These results, along with natural space radiation shielding estimates calculated by NASA Langley Research Center, have been used to provide preliminary input data into a new Macintosh-based software tool. A skeletal version of this tool being developed will allow rapid radiation exposure and risk analyses to be performed on a variety of Lunar and Mars missions utilizing nuclear-powered vehicles.

  19. Effective description of a 3D object for photon transportation in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Suganuma, R.; Ogawa, K.

    2000-06-01

    Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in the simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.
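
The speed-up from region-based object description comes from doing one boundary test per homogeneous region instead of one per voxel. A minimal one-dimensional sketch of that idea (merging consecutive voxels of the same medium along a ray; names are illustrative, not the MRR code):

```python
# A ray crossing N voxels of the same medium needs one boundary test per
# merged region rather than one per voxel. This toy collapses a row of
# medium IDs into (medium, run_length) regions.

def merge_regions(media_along_ray):
    """Collapse consecutive voxels with the same medium ID into (medium, count)."""
    regions = []
    for m in media_along_ray:
        if regions and regions[-1][0] == m:
            regions[-1] = (m, regions[-1][1] + 1)
        else:
            regions.append((m, 1))
    return regions

ray = [0, 0, 0, 1, 1, 2, 2, 2, 2, 0]   # medium IDs along a ray through a phantom
print(merge_regions(ray))   # [(0, 3), (1, 2), (2, 4), (0, 1)] -> 4 tests vs 10
```

The MRR method generalizes this to maximal rectangular boxes in three dimensions, which is where the reported 50% computation-time reduction comes from.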

  20. PHITS-2.76, Particle and Heavy Ion Transport code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-08-01

    Version 03 PHITS can deal with the transport of almost all particles (nucleons, nuclei, mesons, photons, and electrons) over wide energy ranges, using several nuclear reaction models and nuclear data libraries. The geometrical configuration of the simulation can be set with GG (General Geometry) or CG (Combinatorial Geometry). Various quantities such as heat deposition, track length and production yields can be deduced from the simulation, using implemented estimator functions called "tallies". The code also has a function to draw 2D and 3D figures of the calculated results as well as the setup geometries, using the ANGEL code. The physical processes included in PHITS can be divided into two categories, transport processes and collision processes. In the transport process, PHITS can simulate the motion of particles under external fields such as magnetic and gravitational fields. Without external fields, neutral particles move along a straight trajectory with constant energy up to the next collision point. Charged particles, however, interact many times with electrons in the material, losing energy and changing direction. PHITS treats ionization not as a collision but as a transport process, using the continuous-slowing-down approximation. The average stopping power is given by the charge density of the material and the momentum of the particle, taking into account the fluctuations of the energy loss and the angular deviation. In the collision process, PHITS can simulate elastic and inelastic interactions as well as the decay of particles. The total reaction cross section, or the lifetime of the particle, is an essential quantity in the determination of the mean free path of the transported particle. According to the mean free path, PHITS chooses the next collision point using the Monte Carlo method. To generate the secondary particles of the collision, information on the final states of the collision is needed. 
For neutron-induced reactions in the low-energy region, PHITS employs cross sections from the evaluated nuclear data library JENDL-4.0 (Shibata et al 2011). For high-energy neutrons and other particles, several models such as JAM (Nara et al 1999), INCL (Cugnon et al 2011), INCL-ELF (Sawada et al 2012) and JQMD (Niita et al 1995) have been incorporated to simulate nuclear reactions up to 100 GeV/u. The special features of PHITS are the event generator mode (Iwamoto et al 2007) and the microdosimetric function (Sato et al 2009). Owing to the event generator mode, PHITS can determine the profiles of all secondary particles generated from a single nuclear interaction even when using nuclear data libraries, taking momentum and energy conservation into account. The microdosimetric function gives the probability densities of deposition energy in microscopic sites, such as lineal energy y and specific energy z, using a mathematical model developed from the results of track structure simulation. These features are very important for various purposes such as the estimation of soft-error rates of semiconductor devices induced by neutrons, and the relative biological effectiveness of charged particles. From version 2.64, prompt gamma spectra and isomer production rates can be precisely estimated, owing to the implementation of EBITEM (ENSDF-Based Isomeric Transition and isomEr production Model). The photo-nuclear reaction model was improved up to 140 MeV. From version 2.76, an electron and photon transport algorithm based on EGS5 (Hirayama et al. 2005) was incorporated. Models describing photo-nuclear reactions above 140 MeV and muon-nuclear reactions were implemented. Event-generator mode version 2 was developed. Relativistic theory can be considered in the JQMD model.
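
The free-flight step described above, choosing the next collision point from the mean free path, is the same in any Monte Carlo transport code: the flight distance is exponentially distributed with mean λ = 1/(NσT). A generic sketch (not PHITS source code; the mean free path value is arbitrary):

```python
import math
import random

# Generic Monte Carlo free-flight sampling: the distance to the next
# collision is exponential with mean equal to the mean free path.

def sample_flight(mean_free_path, rng=random.random):
    return -mean_free_path * math.log(rng())

random.seed(1)
samples = [sample_flight(2.0) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to the mean free path, 2.0
```

Everything else in the transport loop (which interaction occurs, what secondaries emerge) is then decided from the partial cross sections or reaction models at the sampled collision point.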

  1. Comparison of calculations and measurements of the off-axis radiation dose (Si) in liquid nitrogen as a function of radiation length

    NASA Astrophysics Data System (ADS)

    Cromar, P. F.

    1984-12-01

    In this thesis, results are presented from a study of the off-axis X- and gamma-radiation field produced by a highly relativistic electron beam in liquid nitrogen at various path lengths out to 2 radiation lengths. The off-axis dose in silicon was calculated using the electron/photon transport code CYLTRAN and measured using thermoluminescent dosimeters (TLDs). Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory, and measurements were made using the Naval Postgraduate School 100 MeV linac. Comparison of the results shows CYLTRAN to be in agreement with the experimentally measured values. The CYLTRAN results are extended to the off-axis dose produced by a 100 MeV electron beam in air at standard temperature and pressure (STP).

  2. Measurement and interpretation of electron angle at MABE beam stop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanford, T.W.L.; Coleman, P.D.; Poukey, J.W.

    1985-02-01

    The mean angle of incidence at the beam stop of a 60 kA, 7 MV annular electron beam, in the 20 kG guide field of the MABE accelerator, is determined. Radiation dose measured in TLD arrays mounted downstream of the stop is compared with the radiation dose expected from a CYLTRAN Monte Carlo simulation of the electron/photon transport in the stop as a function of incident angles and energies. All measured radiation profiles are well fit if the electrons are assumed to be incident with a polar angle theta of 15° ± 2°. Comparing this theta with that expected from the Adler-Miller model and with a MAGIC code simulation of beam behavior at the stop enables the mean transverse beam velocity to be estimated.

  3. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a 0.1-nat capacity improvement over conventional chip-level OCDMA systems at a coding rate of 1/10.
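
Shot-noise-limited photon-counting reception means the decision statistic is a Poisson count. The sketch below is a simplified stand-in for that detection model, not the paper's Iter-PIC receiver: one on-off keyed chip is decided by thresholding a Poisson count, and all means and thresholds are illustrative.

```python
import random

# Shot-noise-limited Poisson photon counting for an on-off keyed chip
# (simplified illustration; parameter values are assumptions).

def poisson_count(lam, rng):
    """Sample a Poisson(lam) count by summing unit-rate inter-arrival times."""
    count, t = 0, rng.expovariate(1.0)
    while t < lam:
        count += 1
        t += rng.expovariate(1.0)
    return count

def detect(mean_signal, mean_dark, threshold, bit, rng):
    """Decide a chip value from a thresholded photon count."""
    lam = mean_dark + (mean_signal if bit else 0.0)
    return 1 if poisson_count(lam, rng) >= threshold else 0

rng = random.Random(7)
errors = sum(detect(20.0, 0.5, 5, bit, rng) != bit
             for _ in range(2000) for bit in (0, 1))
print(errors / 4000.0)   # empirical chip error rate (small for these means)
```

Interference cancellation enters when several users' chips overlap: the counts from other users add to the Poisson mean, and the Iter-PIC idea is to iteratively estimate and subtract that contribution before thresholding.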

  4. Absolute Photoionization Cross Section for Fe6+ to Fe10+ Ions in the Photon Energy Region of the 2p–3d Resonance Lines

    NASA Astrophysics Data System (ADS)

    Blancard, C.; Cubaynes, D.; Guilbaud, S.; Bizau, J.-M.

    2018-01-01

    Resonant single photoionization cross sections of Fe^n+ (n = 6 to 10) ions have been measured in absolute values using a merged-beams setup at the SOLEIL synchrotron radiation facility. Photon energies were between about 710 and 780 eV, covering the range of the 2p–3d transitions. The experimental cross sections are compared to calculations we performed using a multi-configuration Dirac–Fock code and the OPAS code dedicated to radiative opacity calculations. Comparisons are also done with the Chandra X-ray observatory NGC 3783 spectra and with the results of previously published calculations.

  5. The X-Ray Polarization of the Accretion Disk Coronae of Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Beheshtipour, Banafsheh; Krawczynski, Henric; Malzac, Julien

    2017-11-01

    Hard X-rays observed in Active Galactic Nuclei (AGNs) are thought to originate from the Comptonization of the optical/UV accretion disk photons in a hot corona. Polarization studies of these photons can help to constrain the corona geometry and the plasma properties. We have developed a ray-tracing code that simulates the Comptonization of accretion disk photons in coronae of arbitrary shapes, and use it here to study the polarization of the X-ray emission from wedge and spherical coronae. We study the predicted polarization signatures for the fully relativistic and various approximate treatments of the elemental Compton scattering processes. We furthermore use the code to evaluate the impact of nonthermal electrons and cyclo-synchrotron photons on the polarization properties. Finally, we model the NuSTAR observations of the Seyfert I galaxy Mrk 335 and predict the associated polarization signal. Our studies show that X-ray polarimetry missions such as NASA’s Imaging X-ray Polarimetry Explorer and the X-ray Imaging Polarimetry Explorer proposed to ESA will provide valuable new information about the physical properties of the plasma close to the event horizon of AGN black holes.
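
Any Comptonization ray tracer of the kind described above repeatedly applies Compton kinematics: the scattered photon energy follows from the scattering angle. This is the standard textbook formula, not code from the paper.

```python
import math

# Compton kinematics: scattered photon energy as a function of scattering
# angle. Standard formula; 511 keV is the electron rest energy.

MEC2_KEV = 511.0

def compton_scattered_energy(E_keV, theta):
    """Energy of a photon of energy E_keV after scattering through angle theta."""
    return E_keV / (1.0 + (E_keV / MEC2_KEV) * (1.0 - math.cos(theta)))

# A 100 keV photon backscattered (theta = pi) loses the most energy:
print(compton_scattered_energy(100.0, math.pi))
```

A full ray-tracing code additionally samples theta from the Klein–Nishina differential cross section and propagates the Stokes parameters to obtain the polarization signatures the abstract discusses.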

  6. Dosimetric characterization of the M−15 high‐dose‐rate Iridium−192 brachytherapy source using the AAPM and ESTRO formalism

    PubMed Central

    Thanh, Minh‐Tri Ho; Munro, John J.

    2015-01-01

    The Source Production & Equipment Co. (SPEC) model M-15 is a new iridium-192 brachytherapy source model intended for use as a temporary high-dose-rate (HDR) brachytherapy source for the Nucletron microSelectron Classic afterloading system. The purpose of this study is to characterize this HDR source for clinical application by obtaining a complete set of Monte Carlo calculated dosimetric parameters for the M-15, as recommended by AAPM and ESTRO for isotopes with average energies greater than 50 keV. This was accomplished by using the MCNP6 Monte Carlo code to simulate the source dosimetry at various points within a pseudo-infinite water phantom. These dosimetric values were then converted into the AAPM and ESTRO dosimetry parameters, and the statistical uncertainty in each parameter was also calculated and presented. The M-15 source was modeled in an MCNP6 Monte Carlo environment using the physical source specifications provided by the manufacturer. Iridium-192 photons were generated uniformly inside the iridium core of the model M-15, with photon and secondary electron transport replicated using photoatomic cross-sectional tables supplied with MCNP6. Simulations were performed for both water and air/vacuum computer models, with a total of 4×10^9 source photon histories for each simulation and the in-air photon spectrum filtered to remove low-energy photons below the δ = 10 keV cutoff. Dosimetric data, including D(r,θ), g_L(r), F(r,θ), Φ_an(r), and φ̄_an, and their statistical uncertainties were calculated from the output of an MCNP model consisting of an M-15 source placed at the center of a spherical water phantom of 100 cm diameter. The air kerma strength in free space, S_K, and the dose rate constant, Λ, were computed from an MCNP model in which the M-15 iridium-192 source was centered at the origin of an evacuated phantom, with a critical volume containing air at STP added 100 cm from the source center. 
The reference dose rate, Ḋ(r_0,θ_0) ≡ Ḋ(1 cm, π/2), is found to be 4.038 ± 0.064 cGy mCi^-1 h^-1. The air kerma strength, S_K, is reported to be 3.632 ± 0.086 cGy cm^2 mCi^-1 h^-1, and the dose rate constant, Λ, is calculated to be 1.112 ± 0.029 cGy h^-1 U^-1. The normalized dose rate, radial dose function, and anisotropy function, with their uncertainties, were computed and are presented in both tabular and graphical format in the report. A dosimetric study of the new M-15 iridium-192 HDR brachytherapy source was performed using the MCNP6 radiation transport code. Dosimetric parameters, including the dose-rate constant, radial dose function, and anisotropy function, were calculated in accordance with the updated AAPM and ESTRO recommendations for brachytherapy sources of average energy greater than 50 keV. These data may therefore be applied toward the development of a treatment planning program and the clinical use of the source. PACS numbers: 87.56.bg, 87.53.Jw PMID:26103489
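
The reported dose rate constant follows directly from the two quantities before it: in the AAPM (TG-43-style) formalism, Λ is the reference dose rate divided by the air kerma strength. Reproducing the abstract's arithmetic from its own per-mCi numbers:

```python
# Dose rate constant = reference dose rate / air kerma strength
# (values taken from the abstract above, both quoted per mCi).

dose_rate_ref = 4.038        # cGy mCi^-1 h^-1 at (1 cm, pi/2)
air_kerma_strength = 3.632   # air kerma strength per mCi, as reported

Lambda = dose_rate_ref / air_kerma_strength
print(round(Lambda, 3))      # 1.112, matching the reported cGy h^-1 U^-1 value
```

This consistency check is a useful sanity test whenever a new source model's dosimetric parameters are tabulated.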

  7. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed.
    Catalogue identifier: AERO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 83617
    No. of bytes in distributed program, including test data, etc.: 1038160
    Distribution format: tar.gz
    Programming language: C++
    Computer: Tested on several PCs and on Mac
    Operating system: Linux, Mac OS X, Windows (native and cygwin)
    RAM: Dependent on the input data, but usually between 1 and 10 MB
    Classification: 2.5, 21.1
    External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki)
    Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors.
    Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs.
    Running time: Dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.

  8. Diagnosis of energy transport in iron buried layer targets using an extreme ultraviolet laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahzad, M.; Culfa, O.; Rossall, A. K.

    2015-02-15

    We demonstrate the use of extreme ultraviolet (EUV) laboratory lasers in probing energy transport in laser-irradiated solid targets. EUV transmission through targets containing a thin layer of iron (50 nm) encased in plastic (CH) is measured after irradiation by a short-pulse (35 fs) laser focussed to irradiances of 3 × 10^16 W cm^-2. Heating of the iron layer gives rise to a rapid decrease in EUV opacity and an increase in the transmission of the 13.9 nm laser radiation as the iron ionizes to Fe^5+ and above, where the ion ionisation energy is greater than the EUV probe photon energy (89 eV). A one-dimensional hydrodynamic fluid code, HYADES, has been used to simulate the temporal variation in EUV transmission (wavelength 13.9 nm) using IMP opacity values for the iron layer, and the simulated transmissions are compared to measured transmission values. When a deliberate pre-pulse is used to preform an expanding plastic plasma, it is found that radiation is important in the heating of the iron layer, while for pre-pulse-free irradiation, radiation transport is not significant.
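
The measured quantity above is a Beer-Lambert transmission: T = exp(-κρd) for mass opacity κ, density ρ, and layer thickness d. The opacity value in this sketch is an illustrative placeholder, not an IMP table value.

```python
import math

# Beer-Lambert transmission of an EUV probe through a thin buried layer.
# kappa (cm^2/g) is an assumed placeholder opacity, not real IMP data.

def transmission(kappa_cm2_per_g, rho_g_per_cm3, thickness_cm):
    return math.exp(-kappa_cm2_per_g * rho_g_per_cm3 * thickness_cm)

# 50 nm of solid-density iron (7.87 g/cm^3) with an assumed cold opacity:
T_cold = transmission(5.0e4, 7.87, 50e-7)
print(T_cold)   # heating lowers kappa, so the measured T rises toward 1
```

This is why the diagnostic works: ionization past Fe^5+ removes the bound states that absorb 89 eV photons, dropping κ and raising T on the probed timescale.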

  9. MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pater, P; Vallieres, M; Seuntjens, J

    2014-06-15

    Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of radiation physics teaching through MC, and surveyed possible project improvements. Results: According to the survey, 76% of students had no or a basic knowledge of MC methods before the class and 65% estimate to have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours was added to the graduate study curriculum in 2012. MC methods produce "gold standard" dose distributions and are slowly entering routine clinical work, and a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from governments of Canada and Quebec. 
PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
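
The sampling steps the students program (interaction length, interaction type, energy deposit under the kerma approximation) can be sketched in a few lines. The cross-section values below are illustrative placeholders, not real 50 keV data, and scattering is simplified to keep the sketch short.

```python
import math
import random

# Minimal kerma-approximation photon transport in a homogeneous slab.
# Energy is deposited at the interaction site; secondaries are not followed.
# MU_TOTAL and P_ABSORB are assumed placeholder values, not real data.

MU_TOTAL = 0.2       # total attenuation coefficient, 1/cm (assumed)
P_ABSORB = 0.3       # photoelectric fraction of interactions (assumed)
SLAB_CM = 20.0

def track_photon(rng):
    """Return depth of absorption, or None if the photon escapes the slab."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / MU_TOTAL     # sample interaction length
        if x > SLAB_CM:
            return None                             # escaped the slab
        if rng.random() < P_ABSORB:
            return x                                # absorbed: deposit energy
        # else: scattered; this sketch keeps the photon moving forward

rng = random.Random(42)
depths = [track_photon(rng) for _ in range(10000)]
absorbed = [d for d in depths if d is not None]
print(len(absorbed) / len(depths))   # fraction absorbed in the slab
```

A real student project would add direction sampling after each scatter and score the deposits on a grid to build the 50 keV dose distribution.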

  10. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

    Monte Carlo methods are used to evaluate data-correction methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the path length generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that in the voxel from which the photon was emitted, the authors calculate the location of the entry point in the voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of the calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, thereby confirming the effectiveness of the algorithm.
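
The core of the proposed method, stepping voxel by voxel and computing an exact boundary point only when the medium ID changes, can be sketched in one dimension. The toy grid below is illustrative, not the MCAT phantom.

```python
# Sketch of the path-tracking idea described above: compare medium IDs along
# the photon's path and compute an exact entry point only at the first change.

def first_medium_change(media, start, step=1):
    """Index of the first voxel whose medium differs from the starting voxel."""
    m0 = media[start]
    i = start + step
    while 0 <= i < len(media):
        if media[i] != m0:
            return i      # only here is the exact entry point computed
        i += step
    return None           # photon leaves the grid without a medium change

row = [1, 1, 1, 1, 2, 2, 1]   # medium IDs along the photon's path
print(first_medium_change(row, 0))   # 4
```

Skipping the per-voxel exit-point geometry is what yields the reported factor-of-two speed-up over the voxel-based method.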

  11. Monte Carlo code G3sim for simulation of plastic scintillator detectors with wavelength shifter fiber readout.

    PubMed

    Mohanty, P K; Dugad, S R; Gupta, S K

    2012-04-01

    A detailed description of a compact Monte Carlo simulation code "G3sim" for studying the performance of a plastic scintillator detector with wavelength shifter (WLS) fiber readout is presented. G3sim was developed for optimizing the design of new scintillator detectors used in the GRAPES-3 extensive air shower experiment. Propagation of the blue photons produced by the passage of relativistic charged particles in the scintillator is treated by incorporating absorption, total internal reflection, and diffuse reflection. Capture of blue photons by the WLS fibers and subsequent re-emission of longer wavelength green photons is appropriately treated. The trapping and propagation of green photons inside the WLS fiber is treated using the laws of optics for meridional and skew rays. The propagation time of each photon is taken into account for the generation of the electrical signal at the photomultiplier. A comparison of the results from G3sim with the performance of a prototype scintillator detector showed excellent agreement between the simulated and measured properties. The simulation results can be parametrized in terms of exponential functions, providing a deeper insight into the functioning of these versatile detectors. G3sim can be used to aid the design and optimize the performance of scintillator detectors prior to actual fabrication, which may result in a considerable saving of time, labor, and money. © 2012 American Institute of Physics.

  12. Organ dose conversion coefficients based on a voxel mouse model and MCNP code for external photon irradiation.

    PubMed

    Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan

    2012-01-01

    A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented, based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, each colour tagged with a specific ID number for implementation of the mouse model in the Monte Carlo N-Particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometries (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). The organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on organ dose. The results show that the organ dose conversion coefficients vary with photon energy in a similar trend for most organs, except the bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.

  13. Simulation of photons from plasmas for the applications to display devices

    NASA Astrophysics Data System (ADS)

    Lee, Hae June; Yoon, Hyun Jin; Lee, Jae Koo

    2007-07-01

    Numerical modeling of the photon transport of ultraviolet (UV) and visible light is presented for plasma-based display devices. The transport of UV light, which undergoes resonance trapping by ground-state atoms, is solved using the Holstein equation. After the UV light is converted to visible light at the phosphor surfaces, the visible light follows complicated trajectories inside the cell and is finally emitted toward the viewing window after some power loss within the cell. A three-dimensional ray trace of the visible light is calculated with a radiosity model. These photon simulations strengthen plasma discharge modeling for display-device applications.

  14. Comparison of HORACE and PHOTOS Algorithms for Multi-Photon Emission in the Context of the W Boson Mass Measurement

    DOE PAGES

    Kotwal, Ashutosh V.; Jayatilaka, Bodhitha

    2016-01-01

    The W boson mass measurement is sensitive to QED radiative corrections due to virtual photon loops and real photon emission. The largest shift in the measured mass, which depends on the transverse momentum spectrum of the charged lepton from the boson decay, is caused by the emission of real photons from the final-state lepton. A number of calculations and codes are available to model final-state photon emission. We perform a detailed study comparing the results from the HORACE and PHOTOS implementations of final-state multiphoton emission in the context of a direct measurement of the W boson mass at the Tevatron. Mass fits are performed using a simulation of the CDF II detector.

  15. Experimental Ten-Photon Entanglement.

    PubMed

    Wang, Xi-Lin; Chen, Luo-Kan; Li, W; Huang, H-L; Liu, C; Chen, C; Luo, Y-H; Su, Z-E; Wu, D; Li, Z-D; Lu, H; Hu, Y; Jiang, X; Peng, C-Z; Li, L; Liu, N-L; Chen, Yu-Ao; Lu, Chao-Yang; Pan, Jian-Wei

    2016-11-18

    We report the first experimental demonstration of quantum entanglement among ten spatially separated single photons. A near-optimal entangled photon-pair source was developed, simultaneously achieving a source brightness of ∼12 MHz/W, a collection efficiency of ∼70%, and an indistinguishability of ∼91% between independent photons, and was used for a step-by-step engineering of multiphoton entanglement. Under a pump power of 0.57 W, the ten-photon count rate was increased by about two orders of magnitude compared to previous experiments, while maintaining a state fidelity sufficiently high to prove genuine ten-particle entanglement. Our work created a state-of-the-art platform for multiphoton experiments, and enabled technologies for challenging optical quantum information tasks, such as the realization of Shor's error correction code and high-efficiency scattershot boson sampling.

  16. Anisotropic imaging performance in indirect x-ray imaging detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badano, Aldo; Kyprianou, Iacovos S.; Sempau, Josep

    We report on the variability in imaging system performance due to oblique x-ray incidence, and the associated transport of quanta (both x rays and optical photons) through the phosphor, in columnar indirect digital detectors. The analysis uses MANTIS, a freely available combined x-ray, electron, and optical Monte Carlo transport code. We describe the main features of the simulation method and provide some validation of the phosphor screen models considered in this work. We report x-ray and electron three-dimensional energy deposition distributions and point-response functions (PRFs), including optical spread, in columnar phosphor screens of thickness 100 and 500 μm, for 19, 39, 59, and 79 keV monoenergetic x-ray beams incident at 0, 10, and 15 degrees. In addition, we present pulse-height spectra for the same phosphor thicknesses, x-ray energies, and angles of incidence. Our results suggest that the PRF due to phosphor blur is highly nonsymmetrical, and that the resolution properties of a columnar screen in a tomographic or tomosynthetic imaging system vary significantly with the angle of x-ray incidence. Moreover, we find that the noise due to the variability in the number of light photons detected per primary x-ray interaction, summarized in the information or Swank factor, is somewhat independent of the thickness and incidence angle of the x-ray beam. Our results also suggest that the anisotropy in the PRF is not reduced in screens with absorptive backings, while the noise introduced by variations in the gain and optical transport is larger. Predictions from MANTIS, after additional validation, can provide the needed understanding of the extent of such variations and eventually lead to the incorporation of incidence-angle-dependent imaging performance into the reconstruction algorithms for volumetric x-ray imaging systems.
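
    The Swank (information) factor mentioned above is conventionally computed from the moments of the pulse-height spectrum; a minimal sketch of that standard definition (not the MANTIS code itself):

```python
import numpy as np

def swank_factor(pulse_height_spectrum):
    """Swank (information) factor I = m1^2 / (m0 * m2), where m_k is the
    k-th moment of the binned pulse-height distribution. I = 1 for a
    delta-function response and decreases as gain fluctuations broaden it."""
    p = np.asarray(pulse_height_spectrum, dtype=float)
    x = np.arange(len(p))
    m0, m1, m2 = p.sum(), (x * p).sum(), (x ** 2 * p).sum()
    return m1 ** 2 / (m0 * m2)
```
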

  17. Dosimetry of Al2O3 optically stimulated luminescent dosimeter at high energy photons and electrons

    NASA Astrophysics Data System (ADS)

    Yusof, M. F. Mohd; Joohari, N. A.; Abdullah, R.; Shukor, N. S. Abd; Kadir, A. B. Abd; Isa, N. Mohd

    2018-01-01

    The linearity of Al2O3 OSL dosimeters (OSLDs) was evaluated for dosimetry in clinical photon and electron beams. Measurements were made at the reference depth Zref, according to the IAEA TRS 398:2000 code of practice, for 6 and 10 MV photons and 6 and 9 MeV electrons. The measured doses were compared with those from thermoluminescent dosimeters (TLDs) and an ionization chamber commonly used for high-energy photon and electron dosimetry. The results showed that the doses measured with the OSL dosimeters were in good agreement with those reported by the ionization chamber for both high-energy photons and electrons. A reproducibility test also showed excellent consistency of OSL readings at the same energy levels. The overall results confirm the suitability of OSL dosimeters for dosimetry involving high-energy photons and electrons in radiotherapy.
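
    A linearity evaluation of this kind reduces to fitting reading against delivered dose; a generic sketch (function name and the R² figure of merit are my choices, not the authors' protocol):

```python
import numpy as np

def linearity_fit(doses, readings):
    """Least-squares line through reading-vs-dose data: returns the slope
    (a calibration factor), the intercept, and R^2 as a linearity figure
    of merit (R^2 close to 1 indicates a linear dose response)."""
    d = np.asarray(doses, dtype=float)
    r = np.asarray(readings, dtype=float)
    slope, intercept = np.polyfit(d, r, 1)
    pred = slope * d + intercept
    ss_res = np.sum((r - pred) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```
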

  18. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; Koju, Vijay; John, Dwayne O

    2016-01-01

    The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.

  19. Calculated organ doses for Mayak production association central hall using ICRP and MCNP.

    PubMed

    Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M

    2003-03-01

    As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the time period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo Neutron Photon-4B (MCNP) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in the International Commission on Radiological Protection Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.
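
    The ICRP-based comparison described above amounts to folding the measured fluence spectrum with tabulated fluence-to-dose conversion coefficients; a minimal sketch (the coefficient table would come from ICRP 51/74; the numbers in the test are placeholders, not ICRP data):

```python
import numpy as np

def equivalent_dose_rate(energies, fluence_rate, table_E, table_h):
    """Fold a binned fluence-rate spectrum (particles cm^-2 s^-1 per bin)
    with tabulated fluence-to-equivalent-dose conversion coefficients h(E)
    (pSv cm^2), linearly interpolated in energy; returns pSv/s."""
    h = np.interp(energies, table_E, table_h)
    return float(np.sum(np.asarray(fluence_rate, dtype=float) * h))
```

    The MCNP check in the record is the converse: tally dose directly in a modeled worker geometry and compare it with this folded estimate.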

  20. Topological phase transitions and chiral inelastic transport induced by the squeezing of light

    PubMed Central

    Peano, Vittorio; Houde, Martin; Brendel, Christian; Marquardt, Florian; Clerk, Aashish A.

    2016-01-01

    There is enormous interest in engineering topological photonic systems. Despite intense activity, most works on topological photonic states (and more generally bosonic states) amount in the end to replicating a well-known fermionic single-particle Hamiltonian. Here we show how the squeezing of light can lead to the formation of qualitatively new kinds of topological states. Such states are characterized by non-trivial Chern numbers, and exhibit protected edge modes, which give rise to chiral elastic and inelastic photon transport. These topological bosonic states are not equivalent to their fermionic (topological superconductor) counterparts and, in addition, cannot be mapped by a local transformation onto topological states found in particle-conserving models. They thus represent a new type of topological system. We study this physics in detail in the case of a kagome lattice model, and discuss possible realizations using nonlinear photonic crystals or superconducting circuits. PMID:26931620

  1. Topological phase transitions and chiral inelastic transport induced by the squeezing of light

    NASA Astrophysics Data System (ADS)

    Peano, Vittorio; Houde, Martin; Brendel, Christian; Marquardt, Florian; Clerk, Aashish A.

    2016-03-01

    There is enormous interest in engineering topological photonic systems. Despite intense activity, most works on topological photonic states (and more generally bosonic states) amount in the end to replicating a well-known fermionic single-particle Hamiltonian. Here we show how the squeezing of light can lead to the formation of qualitatively new kinds of topological states. Such states are characterized by non-trivial Chern numbers, and exhibit protected edge modes, which give rise to chiral elastic and inelastic photon transport. These topological bosonic states are not equivalent to their fermionic (topological superconductor) counterparts and, in addition, cannot be mapped by a local transformation onto topological states found in particle-conserving models. They thus represent a new type of topological system. We study this physics in detail in the case of a kagome lattice model, and discuss possible realizations using nonlinear photonic crystals or superconducting circuits.

  2. Two-photon absorption of [2.2]paracyclophane derivatives in solution: A theoretical investigation

    NASA Astrophysics Data System (ADS)

    Ferrighi, Lara; Frediani, Luca; Fossgaard, Eirik; Ruud, Kenneth

    2007-12-01

    The two-photon absorption of a class of [2.2]paracyclophane derivatives has been studied using quadratic response and density functional theories. For the molecules investigated, several effects influencing the two-photon absorption spectra have been investigated, such as side-chain elongation, hydrogen bonding, the use of ionic species, and solvent effects, the latter described by the polarizable continuum model. The calculations have been carried out using a recent parallel implementation of the polarizable continuum model in the DALTON code. Special attention is given to those aspects that could explain the large solvent effect on the two-photon absorption cross sections observed experimentally for this class of compounds.

  3. Neutron dose measurements of Varian and Elekta linacs by TLD600 and TLD700 dosimeters and comparison with MCNP calculations

    PubMed Central

    Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel

    2014-01-01

    High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, in terms of both protection and dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators were measured with TLD600 and TLD700 thermoluminescent dosimeters (TLDs) and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out in two ways: with a standard 60Co source and with the 18 MV linac photon beam. For neutron calibration with a 241Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). Neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy-1 for the Varian and Elekta accelerators, respectively, compared with 50 and 44 mSv.Gy-1 obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. According to the results, it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field due to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production. PMID:24600167
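
    The TLD600/TLD700 pair technique referred to above conventionally reduces to a two-detector algebra; a textbook sketch (symbols and sensitivity values are illustrative, not the authors' calibration):

```python
def tld_pair_doses(m600, m700, s600_gamma, s600_neutron, s700_gamma):
    """Classic TLD600/TLD700 pair algebra: TLD700 (7LiF) responds
    essentially only to photons, while TLD600 (6LiF) responds to photons
    plus thermal neutrons. Given readings m and calibration sensitivities s,
    return the separated (photon, neutron) doses."""
    d_gamma = m700 / s700_gamma                          # photon dose
    d_neutron = (m600 - s600_gamma * d_gamma) / s600_neutron
    return d_gamma, d_neutron
```

    The record's conclusion is that this subtraction breaks down inside the treatment field, where the photon term dwarfs the neutron signal.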

  4. Neutron dose measurements of Varian and Elekta linacs by TLD600 and TLD700 dosimeters and comparison with MCNP calculations.

    PubMed

    Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel

    2014-01-01

    High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, in terms of both protection and dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators were measured with TLD600 and TLD700 thermoluminescent dosimeters (TLDs) and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out in two ways: with a standard 60Co source and with the 18 MV linac photon beam. For neutron calibration with a (241)Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). Neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy(-1) for the Varian and Elekta accelerators, respectively, compared with 50 and 44 mSv.Gy(-1) obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. According to the results, it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field due to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production.

  5. Radiation-mediated Shocks in Gamma-Ray Bursts: Pair Creation

    NASA Astrophysics Data System (ADS)

    Lundman, Christoffer; Beloborodov, Andrei M.; Vurm, Indrek

    2018-05-01

    Relativistic sub-photospheric shocks are a possible mechanism for producing prompt gamma-ray burst (GRB) emission. Such shocks are mediated by scattering of radiation. We introduce a time-dependent, special relativistic code which dynamically couples Monte Carlo radiative transfer to the flow hydrodynamics. The code also self-consistently follows electron–positron pair production in photon–photon collisions. We use the code to simulate shocks with properties relevant to GRBs. We focus on plane-parallel solutions, which are accurate deep below the photosphere. The shock generates a power-law photon spectrum through the first-order Fermi mechanism, extending upward from the typical upstream photon energy. Strong (high Mach number) shocks produce rising νF_ν spectra. We observe that in non-relativistic shocks the spectrum extends to E_max ∼ m_e v^2, where v is the speed difference between the upstream and downstream. In relativistic shocks the spectrum extends to energies E > 0.1 m_e c^2, where its slope softens due to Klein–Nishina effects. Shocks with Lorentz factors γ > 1.5 are prolific producers of electron–positron pairs, yielding hundreds of pairs per proton. The main effect of pairs is to reduce the shock width by a factor of ∼ Z_±^(-1). Most pairs annihilate far downstream of the shock, and the radiation spectrum relaxes to a Wien distribution, reaching equilibrium with the plasma at a temperature determined by the shock jump conditions and the photon number per proton. We discuss the implications of our results for observations of radiation generated by sub-photospheric shocks.

  6. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  7. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. It is therefore extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for evaluating the COMET code and determining its strengths and weaknesses for these medical physics applications. Response function calculations require Legendre polynomial expansions in space, energy, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse-mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained by pure Monte Carlo results from EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. Better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology: heterogeneous results were obtained in a time comparable to that for the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order was used for each incident photon beam energy so that better comparisons could be made; from this second study, it was found that it is optimal to use different expansion orders depending on the incident beam energy. Recommendations for future work include testing higher expansion orders or modifying the code to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with associated energy and angular distributions.
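
    The response-function superposition at the heart of COMET can be caricatured in a few lines; a toy sketch (the 2x2 "local operators" and all names are illustrative assumptions, not the actual code):

```python
import numpy as np

def precompute_responses(local_operators, unit_sources):
    """Solve each unique local fixed-source problem A_k x = q_k once and
    store the result (the 'response function' library)."""
    return {k: np.linalg.solve(A, unit_sources[k])
            for k, A in local_operators.items()}

def global_solution(mesh_layout, coefficients, library):
    """Assemble the global solution as a linear superposition of the
    stored local responses, weighted per coarse mesh."""
    return sum(c * library[kind] for c, kind in zip(coefficients, mesh_layout))
```

    The payoff mirrors the timing result in the abstract: the expensive solves happen once per unique local problem, and every global sweep afterwards is just weighted sums.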

  8. Observation of backscattering-immune chiral electromagnetic modes without time reversal breaking.

    PubMed

    Chen, Wen-Jie; Hang, Zhi Hong; Dong, Jian-Wen; Xiao, Xiao; Wang, He-Zhou; Chan, C T

    2011-07-08

    A strategy is proposed to realize robust transport in a time reversal invariant photonic system. Using numerical simulation and a microwave experiment, we demonstrate that a chiral guided mode in the channel of a three-dimensional dielectric layer-by-layer photonic crystal is immune to the scattering of a square patch of metal or dielectric inserted to block the channel. The chirality based robust transport can be realized in nonmagnetic dielectric materials without any external field.

  9. A Kinetics Model for KrF Laser Amplifiers

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.

    1999-11-01

    A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform and the energy distribution function of the nascent electron population is calculated to be near Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron as well as the heavy particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off side walls and the rear mirror are included. Results of the model will be compared with data from the NRL NIKE laser and other published results.
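
    The axial photon transport described above admits a small-signal caricature; a sketch assuming zone-wise constant gain and neglecting saturation and spontaneous emission (a drastic simplification of the method-of-characteristics transport in the model):

```python
import math

def amplify_along_axis(i0, gains, dz):
    """Integrate dI/dz = g(z) * I along the lasing axis over zones of
    width dz with constant gain g per zone (no saturation, no spontaneous
    source), giving I = I0 * exp(sum_i g_i * dz)."""
    return i0 * math.exp(sum(g * dz for g in gains))
```
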

  10. Kinetic Modeling of Ultraintense X-ray Laser-Matter Interactions

    NASA Astrophysics Data System (ADS)

    Royle, Ryan; Sentoku, Yasuhiko; Mancini, Roberto

    2016-10-01

    Hard x-ray free-electron lasers (XFELs) have had a profound impact on the physical, chemical, and biological sciences. They can produce millijoule x-ray laser pulses just tens of femtoseconds in duration with more than 10^12 photons each, making them the brightest laboratory x-ray sources ever produced, by several orders of magnitude. An XFEL pulse can be intensified to 10^20 W/cm^2 when focused to submicron spot sizes, making it possible to isochorically heat solid matter well beyond 100 eV. These characteristics enable XFELs to create and probe well-characterized warm and hot dense plasmas of relevance to HED science, planetary science, laboratory astrophysics, relativistic laser plasmas, and fusion research. Several newly developed atomic physics models, including photoionization, Auger ionization, and continuum lowering, have been implemented in a particle-in-cell code, PICLS, which self-consistently solves the x-ray transport, to enable simulation of the non-LTE plasmas created by ultraintense x-ray laser interactions with solid-density matter. The code is validated against the results of several recent experiments and is used to simulate maximum-intensity x-ray heating of solid iron targets. This work was supported by DOE/OFES under Contract No. DE-SC0008827.

  11. Kinetic Modeling of Ultraintense X-Ray Laser-Matter Interactions

    NASA Astrophysics Data System (ADS)

    Royle, Ryan; Sentoku, Yasuhiko; Mancini, Roberto; Johzaki, Tomoyuki

    2015-11-01

    High-intensity XFELs have become a novel way of creating and studying hot dense plasmas. The LCLS at Stanford can deliver a millijoule of energy with more than 10^12 photons in a ~100 femtosecond pulse. By tightly focusing the beam to a micron-scale spot size, the XFEL can be intensified to more than 10^18 W/cm^2, making it possible to heat solid matter isochorically beyond a million degrees (>100 eV). Such extreme states of matter are of considerable interest due to their relevance to astrophysical plasmas. Additionally, they will allow novel ways of studying equation-of-state and opacity physics under Gbar pressures and strong fields. Photoionization is the dominant x-ray absorption mechanism and triggers the heating processes. A photoionization model that takes into account the subshell cross sections has been developed in a kinetic plasma simulation code, PICLS, that solves the x-ray transport self-consistently. The XFEL-matter interaction with several elements, including solid carbon, aluminum, and iron, is studied with the code, and the results are compared with recent LCLS experiments. This work was supported by the DOE/OFES under Contract No. DE-SC0008827.

  12. Global modeling of thermospheric airglow in the far ultraviolet

    NASA Astrophysics Data System (ADS)

    Solomon, Stanley C.

    2017-07-01

    The Global Airglow (GLOW) model has been updated and extended to calculate thermospheric emissions in the far ultraviolet, including sources from daytime photoelectron-driven processes, nighttime recombination radiation, and auroral excitation. It can be run using inputs from empirical models of the neutral atmosphere and ionosphere or from numerical general circulation models of the coupled ionosphere-thermosphere system. It uses a solar flux module, photoelectron generation routine, and the Nagy-Banks two-stream electron transport algorithm to simultaneously handle energetic electron distributions from photon and auroral electron sources. It contains an ion-neutral chemistry module that calculates excited and ionized species densities and the resulting airglow volume emission rates. This paper describes the inputs, algorithms, and code structure of the model and demonstrates example outputs for daytime and auroral cases. Simulations of far ultraviolet emissions by the atomic oxygen doublet at 135.6 nm and the molecular nitrogen Lyman-Birge-Hopfield bands, as viewed from geostationary orbit, are shown, and model calculations are compared to limb-scan observations by the Global Ultraviolet Imager on the TIMED satellite. The GLOW model code is provided to the community through an open-source academic research license.

  13. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity

    PubMed Central

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-01-01

    Most previous quantum computation schemes make use of only one degree of freedom (DoF) of photons, yet an experimental system may possess several DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-NOT (CNOT) gates operating on the polarization and spatial DoFs of two-photon or one-photon systems. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus one half of the quantum simulation resources may be saved in quantum applications if more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to implement various applications, including quantum teleportation and quantum superdense coding. PMID:27424767

  14. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity.

    PubMed

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-07-18

    Most previous quantum computation schemes use only one degree of freedom (DoF) of photons, although an experimental system may possess several DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-NOT (CNOT) gates operating on the polarization and spatial DoFs of two-photon or one-photon systems. These CNOT gates show that the two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF; only coherent states are required. Thus half of the quantum simulation resources may be saved in quantum applications when more complicated circuits are involved, and one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to complete various applications, including quantum teleportation and quantum superdense coding.
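
    The logical action of such a two-DoF CNOT, with polarization as control and spatial mode as target, can be illustrated with a small linear-algebra sketch. The basis ordering and the 4×4 matrix are our illustration of the gate's truth table, not the paper's optical implementation:

```python
import numpy as np

# Hypothetical basis ordering for one photon carrying two qubits:
# |H,a>, |H,b>, |V,a>, |V,b>  (H/V polarization = control, a/b path = target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply_cnot(state):
    """Apply the polarization-controlled CNOT to a 4-component state vector."""
    return CNOT @ np.asarray(state, dtype=complex)

# |V,a> -> |V,b>: the spatial qubit flips only when the polarization is V.
out = apply_cnot([0, 0, 1, 0])
```

    The spatial qubit is untouched whenever the control is |H>, which is exactly the independence of the two DoF-encoded qubits the abstract describes.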

  15. Assessment of ionization chamber correction factors in photon beams using a time saving strategy with PENELOPE code.

    PubMed

    Reis, C Q M; Nicolucci, P

    2016-02-01

    The purpose of this study was to investigate Monte Carlo-based perturbation and beam quality correction factors for ionization chambers in photon beams using a time-saving strategy with the PENELOPE code. Simulations that calculate absorbed dose to water using full photon-beam spectra impinging on the whole water phantom were compared with simulations that reuse a phase-space file previously stored around the point of interest. The widely used NE2571 ionization chamber was modeled with PENELOPE using data from the literature in order to calculate absorbed doses to the air cavity of the chamber. Absorbed doses to water at the reference depth were also calculated to provide the perturbation and beam quality correction factors for that chamber in high-energy photon beams. Results show that simulations with appropriately stored phase-space files can be up to ten times faster than those using a full photon-beam spectrum in the input file. Values of kQ and its components for the NE2571 ionization chamber agree well with published values and are provided with typical statistical uncertainties of 0.2%. Comparisons with kQ values published in current dosimetry protocols, such as AAPM TG-51 and IAEA TRS-398, showed maximum percentage differences of 0.1% and 0.6%, respectively. The proposed strategy yields a significant efficiency gain and can be applied to a variety of ionization chambers and clinical photon beams. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
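
    The time-saving idea, transporting once up to a stored plane and then reusing those histories for every downstream configuration, can be conveyed with a toy Monte Carlo sketch. All distributions and numbers here are illustrative; this is not PENELOPE:

```python
import random

random.seed(1)

def stage1_transport(n_histories):
    """Expensive upstream transport; store particles that reach the plane."""
    phase_space = []
    for _ in range(n_histories):
        energy = random.uniform(0.1, 6.0)   # MeV, illustrative spectrum
        x = random.gauss(0.0, 0.5)          # cm, lateral position at the plane
        if abs(x) < 1.0:                    # crosses the stored-plane window
            phase_space.append((x, energy))
    return phase_space

def stage2_score(phase_space, cavity_halfwidth):
    """Cheap downstream scoring, reusable for many chamber geometries."""
    return sum(e for x, e in phase_space if abs(x) < cavity_halfwidth)

ps = stage1_transport(10_000)     # run the expensive part once...
dose_a = stage2_score(ps, 0.3)    # ...then reuse it for two cavity sizes
dose_b = stage2_score(ps, 0.6)
```

    The upstream stage dominates the runtime, so amortizing it over many downstream scoring runs is where the reported factor-of-ten speedup comes from.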

  16. SIMULATION OF ASTRONOMICAL IMAGES FROM OPTICAL SURVEY TELESCOPES USING A COMPREHENSIVE PHOTON MONTE CARLO APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, J. R.; Peng, E.; Ahmad, Z.

    2015-05-15

    We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large-scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer-scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction-limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover the system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
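
    A heavily simplified sketch of the photon Monte Carlo idea (not PhoSim itself): draw a Poisson number of photons for a source, perturb each photon's arrival position with an atmospheric/optics kernel, and bin the photons into pixels. The Gaussian kernel and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_star(flux_photons, seeing_sigma_pix, npix=32):
    """Toy photon Monte Carlo: shot noise plus a Gaussian 'seeing' kernel."""
    nphot = rng.poisson(flux_photons)                  # photon shot noise
    x = rng.normal(npix / 2, seeing_sigma_pix, nphot)  # per-photon deflection
    y = rng.normal(npix / 2, seeing_sigma_pix, nphot)
    image, _, _ = np.histogram2d(x, y, bins=npix,
                                 range=[[0, npix], [0, npix]])
    return image

img = simulate_star(flux_photons=5000, seeing_sigma_pix=2.0)
```

    Because every photon is tracked individually, noise, point-spread structure, and throughput all emerge from the same sampling loop rather than being applied as separate corrections.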

  17. Photon-HDF5: an open file format for single-molecule fluorescence experiments using photon-counting detectors

    DOE PAGES

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.; ...

    2015-12-23

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes (SPADs), photomultiplier tubes (PMTs), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample descriptions, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
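
    The core per-photon payload such a format stores can be sketched as plain arrays: integer timestamps in clock ticks, a per-photon detector id, and a timestamps unit that converts ticks to seconds. The values and the two-channel selection below are our illustration; consult the Photon-HDF5 specification for the authoritative group and field layout:

```python
import numpy as np

# Illustrative per-photon arrays of the kind Photon-HDF5 records.
timestamps = np.array([120, 340, 990, 1450, 2210], dtype=np.int64)  # clock ticks
detectors = np.array([0, 1, 0, 0, 1], dtype=np.uint8)               # channel ids
timestamps_unit = 12.5e-9                                           # s per tick

arrival_s = timestamps * timestamps_unit   # absolute arrival times in seconds
donor_times = arrival_s[detectors == 0]    # photons from one detection channel
```

    Keeping timestamps as integers plus a unit (rather than floats in seconds) preserves the full clock resolution regardless of acquisition hardware, which is the design choice that makes the format hardware-agnostic.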

  18. Single-Photon Routing for a L-Shaped Channel

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Hou, Jiao-Jiao; Wu, Chun

    2018-02-01

    We have investigated the transport properties of a single photon scattered by a two-level atom embedded in an L-shaped waveguide made of two one-dimensional (1D) semi-infinite coupled-resonator waveguides (CRWs). Single photons can be directed from one CRW to the other through the spontaneous emission of the atom. The results show that the spontaneous emission of the two-level system routes single photons from one CRW to the other, and the presence of the boundary allows the probability of finding the photon in one CRW to reach unity. Our scheme is helpful for constructing ring quantum networks.

  19. Time-reversal-symmetric single-photon wave packets for free-space quantum communication.

    PubMed

    Trautmann, N; Alber, G; Agarwal, G S; Leuchs, G

    2015-05-01

    Readout and retrieval processes are proposed for efficient, high-fidelity quantum state transfer between a matter qubit, encoded in the level structure of a single atom or ion, and a photonic qubit, encoded in a time-reversal-symmetric single-photon wave packet. They are based on controlling spontaneous photon emission and absorption of a matter qubit on demand in free space by stimulated Raman adiabatic passage. As these processes do not involve mode selection by high-finesse cavities or photon transport through optical fibers, they offer interesting perspectives as basic building blocks for free-space quantum-communication protocols.

  20. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the TriComp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to the target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
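
    Envelope codes of the kind listed above (XTR, LAMDA) march an rms-envelope equation in the axial coordinate. The sketch below integrates a generic constant-focusing axisymmetric envelope equation, R'' = -k0²R + K/R + ε²/R³, with illustrative parameters; it is not the DARHT lattice or either code:

```python
import math

def envelope(R0, Rp0, k0, K, eps, z_end, dz=1e-4):
    """March R'' = -k0^2 R + K/R + eps^2/R^3 in z (symplectic Euler)."""
    R, Rp = R0, Rp0
    for _ in range(int(z_end / dz)):
        Rpp = -k0 ** 2 * R + K / R + eps ** 2 / R ** 3
        Rp += Rpp * dz
        R += Rp * dz
    return R

k0, K, eps = 2.0, 1e-3, 5e-4   # focusing strength, perveance, emittance
# Matched radius: set R'' = 0  =>  R^2 = (K + sqrt(K^2 + 4 k0^2 eps^2)) / (2 k0^2)
R_matched = math.sqrt((K + math.sqrt(K ** 2 + 4 * k0 ** 2 * eps ** 2))
                      / (2 * k0 ** 2))
R_end = envelope(R_matched, 0.0, k0, K, eps, z_end=1.0)
```

    A matched beam launched with zero slope stays at constant radius, which is the standard sanity check for an envelope integrator.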

  1. Integrated system for production of neutronics and photonics calculational constants. Volume 17, Part B, Rev. 1. Program SIGMA 1 (Version 78-1): Doppler broadened evaluated cross sections in the evaluated nuclear data file/Version B (ENDF/B) format. [For CDC-7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.

    1978-07-04

    The code SIGMA1 Doppler broadens evaluated cross sections in the ENDF/B format. The code can be applied only to data that vary as a linear function of energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code. 6 figures, 2 tables.
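
    The idea of broadening linearly interpolable cross sections can be conveyed with a toy smearing kernel. SIGMA1 itself applies the exact free-gas Doppler kernel to ENDF/B linear-linear data; the Gaussian stand-in below only illustrates how a 0 K resonance is lowered and widened at temperature:

```python
import numpy as np

def broaden(e_grid, sigma, e_out, width):
    """Smear a linearly interpolated sigma(E) with a Gaussian of given width.
    (Illustrative kernel only; not the exact free-gas kernel SIGMA1 uses.)"""
    sigma_fine = np.interp(e_out, e_grid, sigma)   # linear-linear data
    out = np.empty_like(e_out)
    for i, e in enumerate(e_out):
        w = np.exp(-0.5 * ((e_out - e) / width) ** 2)
        out[i] = np.sum(w * sigma_fine) / np.sum(w)
    return out

e = np.linspace(0.9, 1.1, 201)                       # energy grid, eV
cold = np.where(np.abs(e - 1.0) < 0.01, 100.0, 1.0)  # narrow 0 K resonance
warm = broaden(e, cold, e, width=0.02)               # broadened profile
```

    The broadened peak is lower and wider than the 0 K resonance while the off-resonance level is nearly unchanged, the qualitative signature of Doppler broadening.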

  2. Production of photons in relativistic heavy-ion collisions

    DOE PAGES

    Paquet, Jean-Francois; Denicol, Gabriel S.; Shen, Chun; ...

    2016-04-18

    In this work it is shown that the use of a hydrodynamical model of heavy-ion collisions which incorporates recent developments, together with updated photon emission rates, greatly improves agreement with both ALICE and PHENIX measurements of direct photons, supporting the idea that thermal photons are the dominant source of direct photon momentum anisotropy. The event-by-event hydrodynamical model uses the impact parameter dependent Glasma model (IP-Glasma) initial states and includes, for the first time, both shear and bulk viscosities, along with second-order couplings between the two viscosities. Furthermore, the effect of both shear and bulk viscosities on the photon rates is studied, and those transport coefficients are shown to have measurable consequences on the photon momentum anisotropy.

  3. Technique for handling wave propagation specific effects in biological tissue: mapping of the photon transport equation to Maxwell's equations.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin; Paganin, David M; Hendahewa, Priyantha R D S

    2008-10-27

    A novel algorithm for mapping the photon transport equation (PTE) to Maxwell's equations is presented. Owing to its accuracy, wave propagation through biological tissue is modeled using the PTE. The mapping of the PTE to Maxwell's equations is required to model wave propagation through foreign structures implanted in biological tissue for sensing and characterization of tissue properties. The PTE solves for only the magnitude of the intensity but Maxwell's equations require the phase information as well. However, it is possible to construct the phase information approximately by solving the transport of intensity equation (TIE) using the full multigrid algorithm.
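
    Under the simplifying assumption of nearly uniform intensity I₀, the TIE, ∂I/∂z = -(λ/2π)∇·(I∇φ), reduces to a Poisson equation for the phase φ, which can be inverted spectrally. The 1D FFT sketch below illustrates the idea; the paper itself uses a full multigrid solver and does not assume uniform intensity:

```python
import numpy as np

def tie_phase(dI_dz, I0, wavelength, dx):
    """Recover phase from dI/dz via the uniform-intensity TIE (periodic BCs)."""
    n = len(dI_dz)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    rhs_hat = np.fft.fft(-(2 * np.pi / wavelength) * dI_dz / I0)  # FFT of phi''
    k2 = np.where(k == 0, 1.0, k ** 2)        # avoid dividing by zero at k = 0
    phi_hat = np.where(k == 0, 0.0, -rhs_hat / k2)  # FFT(phi'') = -k^2 phi_hat
    return np.real(np.fft.ifft(phi_hat))

# Consistency check against a known phase: phi(x) = cos(2*pi*x/L).
n, dx, lam, I0 = 64, 1.0, 0.5, 1.0
x = np.arange(n) * dx
phi_true = np.cos(2 * np.pi * x / (n * dx))
phi_xx = -(2 * np.pi / (n * dx)) ** 2 * phi_true
dI_dz = -(lam / (2 * np.pi)) * I0 * phi_xx    # manufactured TIE data
phi_rec = tie_phase(dI_dz, I0, lam, dx)
err = np.max(np.abs(phi_rec - phi_true))
```

    The recovered phase matches the manufactured one up to the (zero-mean) constant that the TIE cannot determine.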

  4. Single-pass BPM system of the Photon Factory storage ring.

    PubMed

    Honda, T; Katoh, M; Mitsuhashi, T; Ueda, A; Tadano, M; Kobayashi, Y

    1998-05-01

    At the 2.5 GeV ring of the Photon Factory, a single-pass beam-position monitor (BPM) system is being prepared for the storage ring and the beam transport line. In the storage ring, the injected beam position during the first several turns can be measured with a single injection pulse. The BPM system has adequate performance, useful for the commissioning of the new low-emittance lattice. Several stripline BPMs are being installed in the beam transport line. Continuous monitoring of the orbit in the beam transport line will be useful for stabilizing the injection energy as well as the injection beam orbit.
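
    A stripline BPM infers beam position from the signals induced on opposing electrodes; to first order the offset follows the difference-over-sum of the amplitudes. The sensitivity constant below is illustrative, not the Photon Factory calibration:

```python
def bpm_position(v_right, v_left, k_mm=10.0):
    """Horizontal offset (mm) from opposing stripline electrode amplitudes.
    k_mm is an assumed sensitivity constant for illustration."""
    return k_mm * (v_right - v_left) / (v_right + v_left)

x_offset = bpm_position(1.1, 0.9)   # beam slightly toward the right electrode
x_center = bpm_position(1.0, 1.0)   # balanced signals: on axis
```

    Normalizing by the sum makes the reading insensitive to bunch charge, which is what allows a single-pass measurement from one injection pulse.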

  5. X-ray compass for determining device orientation

    DOEpatents

    Da Silva, Luiz B.; Matthews, Dennis L.; Fitch, Joseph P.; Everett, Matthew J.; Colston, Billy W.; Stone, Gary F.

    1999-01-01

    An apparatus and method for determining the orientation of a device with respect to an x-ray source. In one embodiment, the present invention is coupled to a medical device in order to determine the rotational orientation of the medical device with respect to the x-ray source. In such an embodiment, the present invention is comprised of a scintillator portion which is adapted to emit photons upon the absorption of x-rays emitted from the x-ray source. An x-ray blocking portion is coupled to the scintillator portion. The x-ray blocking portion is disposed so as to vary the quantity of x-rays which penetrate the scintillator portion based upon the particular rotational orientation of the medical device with respect to the x-ray source. A photon transport mechanism is also coupled to the scintillator portion. The photon transport mechanism is adapted to pass the photons emitted from the scintillator portion to an electronics portion. By analyzing the quantity of the photons, the electronics portion determines the rotational orientation of the medical device with respect to the x-ray source.

  6. X-ray compass for determining device orientation

    DOEpatents

    Da Silva, L.B.; Matthews, D.L.; Fitch, J.P.; Everett, M.J.; Colston, B.W.; Stone, G.F.

    1999-06-15

    An apparatus and method for determining the orientation of a device with respect to an x-ray source are disclosed. In one embodiment, the present invention is coupled to a medical device in order to determine the rotational orientation of the medical device with respect to the x-ray source. In such an embodiment, the present invention is comprised of a scintillator portion which is adapted to emit photons upon the absorption of x-rays emitted from the x-ray source. An x-ray blocking portion is coupled to the scintillator portion. The x-ray blocking portion is disposed so as to vary the quantity of x-rays which penetrate the scintillator portion based upon the particular rotational orientation of the medical device with respect to the x-ray source. A photon transport mechanism is also coupled to the scintillator portion. The photon transport mechanism is adapted to pass the photons emitted from the scintillator portion to an electronics portion. By analyzing the quantity of the photons, the electronics portion determines the rotational orientation of the medical device with respect to the x-ray source. 25 figs.

  7. Transport of photons produced by lightning in clouds

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard

    1991-01-01

    The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are commonly used to explain the nature of these effects: Monte Carlo simulation and an equivalent-medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent-medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.
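
    The Monte Carlo technique mentioned above can be sketched for a homogeneous cuboidal cloud: photons perform an isotropic random walk with exponentially distributed free paths, and each collision either scatters (with probability given by the single-scattering albedo) or absorbs the photon. All parameters here are illustrative:

```python
import math
import random

random.seed(0)

def escape_fraction(n_photons, side, mfp, albedo):
    """Fraction of photons born at the cloud center that escape the cube."""
    escaped = 0
    for _ in range(n_photons):
        x = y = z = side / 2.0                     # photon born at the center
        while True:
            mu = random.uniform(-1.0, 1.0)         # isotropic direction cosine
            phi = random.uniform(0.0, 2 * math.pi)
            s = -mfp * math.log(1.0 - random.random())  # exponential free path
            sin_t = math.sqrt(1.0 - mu * mu)
            x += s * sin_t * math.cos(phi)
            y += s * sin_t * math.sin(phi)
            z += s * mu
            if not (0 <= x <= side and 0 <= y <= side and 0 <= z <= side):
                escaped += 1                       # left the cloud: "scattered"
                break
            if random.random() > albedo:           # absorbed by a droplet
                break
    return escaped / n_photons

f = escape_fraction(2000, side=10.0, mfp=2.0, albedo=0.9)
```

    Tallying escape positions and times instead of just counts would turn this sketch into an estimator of the time-dependent intensities the paper derives from transport theory.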

  8. Solid state laser media driven by remote nuclear powered fluorescence

    DOEpatents

    Prelas, Mark A.

    1992-01-01

    An apparatus is provided for driving a solid state laser by a nuclear powered fluorescence source which is located remote from the fluorescence source. A nuclear reaction produced in a reaction chamber generates fluorescence or photons. The photons are collected from the chamber into a waveguide, such as a fiber optic waveguide. The waveguide transports the photons to the remote laser for exciting the laser.

  9. Photon-HDF5: an open file format for single-molecule fluorescence experiments using photon-counting detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes (SPADs), photomultiplier tubes (PMTs), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample descriptions, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows, and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
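
    The generic principle behind such variance-reduction techniques, sample from a biased distribution and reweight to keep the estimator unbiased, can be shown with a toy importance-sampling example. The production tools named above (weight windows, ADVANTG) are far more elaborate; here we merely estimate I = ∫₀¹ x⁹ dx = 0.1 two ways:

```python
import random
import statistics

random.seed(3)
N = 20_000

# Naive estimator: f(x) = x^9 with x ~ Uniform(0, 1).
naive = [random.random() ** 9 for _ in range(N)]

# Importance sampling: draw x from g(x) = 2x (more mass where f is large,
# sampled as x = sqrt(u)) and weight each sample by f(x)/g(x).
biased = []
for _ in range(N):
    x = random.random() ** 0.5
    biased.append(x ** 9 / (2 * x))

est_naive = statistics.fmean(naive)
est_is = statistics.fmean(biased)
var_ratio = statistics.pvariance(naive) / statistics.pvariance(biased)
```

    Both estimators are unbiased, but the reweighted one has a noticeably smaller variance, i.e. the same statistical uncertainty is reached with fewer histories.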

  11. High-capacity quantum secure direct communication using hyper-entanglement of photonic qubits

    NASA Astrophysics Data System (ADS)

    Cai, Jiarui; Pan, Ziwen; Wang, Tie-Jun; Wang, Sihai; Wang, Chuan

    2016-11-01

    Hyper-entanglement, a system of photons entangled in multiple degrees of freedom (DoFs), is considered a promising way of increasing channel capacity and providing a powerful safeguard against eavesdropping. In this work, we propose a coding scheme based on a 3-particle system hyper-entangled in polarization and orbital angular momentum (OAM), and its application as a quantum secure direct communication (QSDC) protocol. The OAM values are encoded via the Fibonacci sequence, and the polarization carries information through defined unitary operations. The internal relations within the secret message enhance security, owing to the principles of quantum mechanics and the properties of the Fibonacci sequence. We also discuss the coding capacity and security, along with some simulation results, to show the scheme's superiority and extensibility.
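
    One standard way to encode integers with Fibonacci numbers, offered here as an illustration rather than the paper's exact mapping, is the Zeckendorf representation: every positive integer is a unique sum of non-consecutive Fibonacci numbers.

```python
def zeckendorf(n):
    """Return the non-consecutive Fibonacci numbers summing to n (greedy)."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)   # greedy choice guarantees non-consecutive terms
            n -= f
    return parts

code = zeckendorf(100)        # e.g. candidate OAM quanta for one symbol
```

    The uniqueness of the decomposition is what makes a Fibonacci-based alphabet unambiguous to decode.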

  12. Regimes of radiative and nonradiative transitions in transport through an electronic system in a photon cavity reaching a steady state

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Vidar; Jonsson, Thorsteinn H.; Bernodusson, Maria Laura; Abdullah, Nzar Rauf; Sitek, Anna; Goan, Hsi-Sheng; Tang, Chi-Shung; Manolescu, Andrei

    2017-01-01

    We analyze how a multilevel many-electron system in a photon cavity approaches the steady state when coupled to external leads. When a plunger gate is used to lower cavity photon dressed one- and two-electron states below the bias window defined by the external leads, we can identify one regime with nonradiative transitions dominating the electron transport, and another regime with radiative transitions. Both transitions trap the electrons in the states below the bias bringing the system into a steady state. The order of the two regimes and their relative strength depends on the location of the bias window in the energy spectrum of the system and the initial conditions.

  13. Multi-dimensional free-electron laser simulation codes : a comparison study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biedron, S. G.; Chae, Y. C.; Dejus, R. J.

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  14. Multi-Dimensional Free-Electron Laser Simulation Codes: A Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  15. The APS SASE FEL : modeling and code comparison.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biedron, S. G.

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  16. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
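
    The ordered-subset expectation maximization algorithm used in the final step is a block-iterative form of the MLEM update. For a tiny stand-in system matrix the multiplicative update looks like this (the 2×3 projector and noiseless data below are purely illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])       # detector-bin x image-pixel weights
truth = np.array([2.0, 1.0, 3.0])
y = A @ truth                         # noiseless "measured" projections

x = np.ones(3)                        # uniform initial image estimate
for _ in range(200):
    ratio = y / (A @ x + 1e-12)       # measured / forward-projected
    x *= (A.T @ ratio) / A.sum(axis=0)  # multiplicative EM update

residual = np.abs(A @ x - y).max()    # fit to the projections
```

    The update preserves non-negativity automatically, one reason EM-type methods are favored for emission tomography; OSEM accelerates it by cycling the same update over subsets of the projections.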

  17. Surface/Interface Carrier-Transport Modulation for Constructing Photon-Alternative Ultraviolet Detectors Based on Self-Bending-Assembled ZnO Nanowires.

    PubMed

    Guo, Zhen; Zhou, Lianqun; Tang, Yuguo; Li, Lin; Zhang, Zhiqi; Yang, Hongbo; Ma, Hanbin; Nathan, Arokia; Zhao, Dongxu

    2017-09-13

    Surface/interface charge-carrier generation, diffusion, and recombination/transport modulation are especially important in the construction of high-efficiency photodetectors in the field of nanoscience. In this paper, an ultraviolet (UV) detector based on ZnO nanostructures is designed, considering photon trapping, surface plasmonic resonance (SPR), piezo-phototronic effects, interface carrier trapping/transport control, and collection. Through carefully optimized surface/interface carrier-transport modulation, devices with detectivities as high as 1.69 × 10^16 and 1.71 × 10^16 cm·Hz^(1/2)/W under 380 nm irradiation at an ultralow bias of 0.2 V are realized by alternating nanoparticle/nanowire active layers; the designed UV photodetectors show fast and slow recovery processes of 0.27 and 4.52 ms, respectively, which satisfy practical needs. Further, UV photodetection with an alternative response can be achieved by varying correlated key parameters through efficient surface/interface carrier-transport modulation: the spectrally resolved photoresponse reveals controlled detection in the UV region, with photodetection enabled or suppressed by varying the active layers, the irradiation distance from one of the electrodes, the standing state, or the electric field. The detailed carrier generation, diffusion, and recombination/transport processes are illustrated to explain the charge-carrier dynamics underlying the photoresponse behavior.

  18. Photon Bubbles and the Vertical Structure of Accretion Disks

    NASA Astrophysics Data System (ADS)

    Begelman, Mitchell C.

    2006-06-01

    We consider the effects of "photon bubble" shock trains on the vertical structure of radiation pressure-dominated accretion disks. These density inhomogeneities are expected to develop spontaneously in radiation-dominated accretion disks where magnetic pressure exceeds gas pressure, even in the presence of magnetorotational instability (MRI). They increase the rate at which radiation escapes from the disk and may allow disks to exceed the Eddington limit by a substantial factor without blowing themselves apart. To refine our earlier analysis of photon bubble transport in accretion disks, we generalize the theory of photon bubbles to include the effects of finite optical depths and radiation damping. Modifications to the diffusion law at low τ tend to "fill in" the low-density regions of photon bubbles, while radiation damping inhibits the formation of photon bubbles at large radii, small accretion rates, and small heights above the equatorial plane. Accretion disks dominated by photon bubble transport may reach luminosities from 10 to >100 times the Eddington limit (L_Edd), depending on the mass of the central object, while remaining geometrically thin. However, photon bubble-dominated disks with α-viscosity are subject to the same thermal and viscous instabilities that plague standard radiation pressure-dominated disks, suggesting that they may be intrinsically unsteady. Photon bubbles can lead to a "core-halo" vertical disk structure. In super-Eddington disks the halo forms the base of a wind, which carries away substantial energy and mass, but not enough to prevent the luminosity from exceeding L_Edd. Photon bubble-dominated disks may have smaller color corrections than standard accretion disks of the same luminosity. They remain viable contenders for some ultraluminous X-ray sources and may play a role in the rapid growth of supermassive black holes at high redshift.

  19. Effects of model approximations for electron, hole, and photon transport in swift heavy ion tracks

    NASA Astrophysics Data System (ADS)

    Rymzhanov, R. A.; Medvedev, N. A.; Volkov, A. E.

    2016-12-01

    The event-by-event Monte Carlo code, TREKIS, was recently developed to describe excitation of the electron subsystems of solids in the nanometric vicinity of a trajectory of a nonrelativistic swift heavy ion (SHI) decelerated in the electronic stopping regime. The complex dielectric function (CDF) formalism was applied in the used cross sections to account for the collective response of matter to excitation. Using this model we investigate the effects of the basic assumptions on the modeled kinetics of the electronic subsystem which ultimately determine the parameters of an excited material in an SHI track. In particular, (a) effects of different momentum dependencies of the CDF on scattering of projectiles on the electron subsystem are investigated. The 'effective one-band' approximation for target electrons produces good agreement between the calculated electron mean free paths and those obtained in experiments on metals. (b) Effects of the collective response of a lattice appear to dominate the randomization of electron motion. We study how sensitive these effects are to the target temperature. We also compare results of applications of different model forms of (quasi-) elastic cross sections in simulations of the ion track kinetics, e.g. those calculated taking into account optical phonons in the CDF form vs. Mott's atomic cross sections. (c) It is demonstrated that the kinetics of valence holes significantly affects redistribution of the excess electronic energy in the vicinity of an SHI trajectory as well as its conversion into lattice excitation in dielectrics and semiconductors. (d) It is also shown that induced transport of photons originating from radiative decay of core holes brings the excess energy faster and farther away from the track core; however, the amount of this energy is relatively small.

  20. High-speed and high-efficiency travelling wave single-photon detectors embedded in nanophotonic circuits

    PubMed Central

    Pernice, W.H.P.; Schuck, C.; Minaeva, O.; Li, M.; Goltsman, G.N.; Sergienko, A.V.; Tang, H.X.

    2012-01-01

    Ultrafast, high-efficiency single-photon detectors are among the most sought-after elements in modern quantum optics and quantum communication. However, imperfect modal matching and finite photon absorption rates have usually limited their maximum attainable detection efficiency. Here we demonstrate superconducting nanowire detectors atop nanophotonic waveguides, which enable a drastic increase of the absorption length for incoming photons. This allows us to achieve high on-chip single-photon detection efficiency up to 91% at telecom wavelengths, repeatable across several fabricated chips. We also observe remarkably low dark count rates without significant compromise of the on-chip detection efficiency. The detectors are fully embedded in scalable silicon photonic circuits and provide ultrashort timing jitter of 18 ps. Exploiting this high temporal resolution, we demonstrate ballistic photon transport in silicon ring resonators. Our direct implementation of a high-performance single-photon detector on chip overcomes a major barrier in integrated quantum photonics. PMID:23271658

  1. Prospective of Photon Propulsion for Interstellar Flight

    NASA Astrophysics Data System (ADS)

    Bae, Young K.

    Mastering photon propulsion is proposed to be the key to overcoming the limits of current propulsion technology based on conventional rocketry, potentially opening a new space era. A perspective on photon propulsion is presented here to elucidate that interstellar manned roundtrip flight could be achievable in a century within a framework of existing scientific principles, once the required existing technologies are further developed. It is shown that the developmental pathway towards interstellar flight demands not only technological breakthroughs, but also consistent long-term world-scale economic interest and investment. Such interest and investment will result from positive financial returns from routine interstellar commutes that can transport highly valuable commodities in a profitable manner. The Photonic Railway, a permanent energy-efficient transportation structure based on the Beamed-Laser Propulsion (BLP) of Forward and the Photonic Laser Thruster (PLT) of the author, is proposed to enable such routine interstellar commutes via Spacetrains. A four-phased evolutionary developmental pathway towards the Interstellar Photonic Railway is proposed. Each phase poses evolutionary, yet daunting, technological and financial challenges that need to be overcome within each time frame of 20-30 years, and is projected to generate multitudes of applications that would lead to sustainable reinvestment into its development. If successfully developed, the Photonic Railway would bring about a quantum leap in human economic and social interests in space, from exploration to terraforming, mining, colonization, and permanent habitation in exoplanets.
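
    The underlying numbers are easy to check: a perfectly reflected beam of power P imparts thrust F = 2P/c, and a photon-recycling thruster in the PLT spirit scales this roughly with the number of recirculations m. The beam power and bounce count below are illustrative, not values from the paper:

```python
C = 299_792_458.0   # speed of light, m/s

def photon_thrust(power_w, bounces=1):
    """Thrust (N) from a beam of power_w reflected `bounces` times."""
    return 2.0 * bounces * power_w / C

f_single = photon_thrust(1e6)           # 1 MW beam, single reflection
f_recycled = photon_thrust(1e6, 300)    # ~300 recirculations between mirrors
```

    A single reflection of a megawatt beam yields only millinewtons, which is why beam recycling is central to making photon propulsion practical.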

  2. Controlling resonant photonic transport along optical waveguides by two-level atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan Conghua; College of Physics and Electronic Engineering, Sichuan Normal University, Chengdu 610068; Wei Lianfu

    2011-10-15

Recent works [Shen et al., Phys. Rev. Lett. 95, 213001 (2005); Zhou et al., Phys. Rev. Lett. 101, 100501 (2008)] showed that the incident photons cannot transmit along an optical waveguide containing a resonant two-level atom (TLA). Here we propose an approach to overcome such a difficulty by using asymmetric couplings between the photons and a TLA. Our numerical results show that the transmission spectrum of the photon depends on both the frequency of the incident photons and the photon-TLA couplings. Consequently, this system can serve as a controllable photon attenuator, by which the transmission probability of the resonantly incident photons can be changed from 0% to 100%. A possible application to explain the recent experimental observations [Astafiev et al., Science 327, 840 (2010)] is also discussed.
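For reference, the symmetric-coupling baseline established in the works cited above is a Lorentzian transmission dip that vanishes exactly on resonance; the asymmetric-coupling proposal modifies this line shape. A sketch of the standard symmetric-coupling result (parameter values illustrative):

```python
import numpy as np

def transmission(delta, gamma):
    # Single-photon transmission past a two-level atom symmetrically coupled
    # to a 1D waveguide (Shen-Fan result): a Lorentzian dip of width gamma,
    # with complete reflection (T = 0) on resonance (delta = 0).
    return delta**2 / (delta**2 + (gamma / 2.0)**2)

gamma = 1.0                          # waveguide-induced decay rate (illustrative)
delta = np.linspace(-5.0, 5.0, 11)   # detuning of the incident photon
print(transmission(delta, gamma))    # dips to 0 at the center of the array
```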

  3. 76 FR 2744 - Disclosure of Code-Share Service by Air Carriers and Sellers of Air Transportation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... DEPARTMENT OF TRANSPORTATION Office of the Secretary Disclosure of Code-Share Service by Air Carriers and Sellers of Air Transportation AGENCY: Office of the Secretary, Department of Transportation..., their agents, and third party sellers of air transportation in view of recent amendments to 49 U.S.C...

  4. Implicitly causality enforced solution of multidimensional transient photon transport equation.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin

    2009-12-21

A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependency of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, our technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation into its steady-state version. The resulting equations are solved using the discrete ordinates method with a finite volume approach. Our method thus enables one to handle general anisotropic, inhomogeneous media within a single formulation, with an added degree of flexibility owing to the ability to invoke higher-order approximations of discrete ordinate quadrature sets. Compared with existing strategies, this method therefore offers the advantage of representing the intensity with high accuracy, minimizing numerical dispersion and false propagation errors. The application of the method to one-, two- and three-dimensional geometries is provided.
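The projection onto a Laguerre basis described above can be sketched with NumPy's Laguerre utilities: Laguerre polynomials are orthonormal under the weight e^(-t) on [0, ∞), so expansion coefficients follow from Gauss-Laguerre quadrature. This toy example expands a causal decaying pulse f(t) = e^(-t); the test function and truncation order are illustrative, not taken from the paper:

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_coeffs(f, n_terms, quad_order=60):
    # Gauss-Laguerre nodes/weights approximate integrals of the form
    # int_0^inf exp(-t) g(t) dt; Laguerre polynomials are orthonormal under
    # exactly that weight, so c_n = int_0^inf exp(-t) f(t) L_n(t) dt.
    x, w = laguerre.laggauss(quad_order)
    return np.array([np.sum(w * f(x) * laguerre.lagval(x, np.eye(n_terms)[n]))
                     for n in range(n_terms)])

def laguerre_eval(coeffs, t):
    # Reconstruct f(t) ~ sum_n c_n L_n(t).
    return laguerre.lagval(t, coeffs)

# Toy causal pulse f(t) = exp(-t); analytically c_n = 2**-(n + 1).
f = lambda t: np.exp(-t)
c = laguerre_coeffs(f, n_terms=12)
print(c[:3])                   # ~ [0.5, 0.25, 0.125]
print(laguerre_eval(c, 1.0))   # ~ exp(-1) = 0.3679
```

In the paper's scheme each coefficient obeys a steady-state transport equation, so causality is built in rather than enforced after the fact.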

  5. Computer Code for Transportation Network Design and Analysis

    DOT National Transportation Integrated Search

    1977-01-01

    This document describes the results of research into the application of the mathematical programming technique of decomposition to practical transportation network problems. A computer code called Catnap (for Control Analysis Transportation Network A...

  6. High Power Orbit Transfer Vehicle

    DTIC Science & Technology

    2003-07-01

    multijunction device is a stack of individual single-junction cells in descending order of band gap. The top cell captures the high-energy photons and passes...the rest of the photons on to be absorbed by lower-band-gap cells. Multijunction devices achieve a higher total conversion efficiency because they...minimum temperatures on the thruster modules and main bus. In the MATLAB code for these calculations, maximum and minimum temperatures are plotted

  7. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.

    PubMed

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-05

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
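The kind of analysis Photon-HDF5 is designed to feed can be sketched without an HDF5 library: the essential content is an integer timestamp array, a detector-channel array, and a clock unit. The field names below mirror the Photon-HDF5 layout, but the in-memory dict is only a mock; real files are written with HDF5 tools such as h5py or the phconvert library mentioned above, and all numerical values here are invented:

```python
import numpy as np

# In-memory mock of the Photon-HDF5 photon_data layout (the real format
# stores these groups/fields in an HDF5 file).
photon_data = {
    "timestamps": np.array([120, 340, 515, 900, 1422], dtype=np.int64),
    "detectors":  np.array([0, 1, 0, 0, 1], dtype=np.uint8),
    "timestamps_specs": {"timestamps_unit": 10e-9},   # 10 ns clock period
}

unit = photon_data["timestamps_specs"]["timestamps_unit"]
t_sec = photon_data["timestamps"] * unit   # timestamps in seconds
delays = np.diff(t_sec)                    # inter-photon delays
print(delays * 1e6)                        # in microseconds
```

Storing raw integer clock ticks plus a single unit field is what keeps the format lossless across acquisition hardware.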

  8. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments

    PubMed Central

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-01

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. PMID:26745406

  9. Quantitative analysis of optical properties of flowing blood using a photon-cell interactive Monte Carlo code: effects of red blood cells' orientation on light scattering.

    PubMed

    Sakota, Daisuke; Takatani, Setsuo

    2012-05-01

Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased; therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can reproduce the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best reproduced the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.
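Monte Carlo codes for tissue and blood optics commonly sample scattering angles from the Henyey-Greenstein phase function, whose anisotropy factor g captures the strong forward scattering reported above. A sketch of the standard inverse-CDF sampling formula (the value of g is illustrative, and the pciMC code itself models photon-cell interactions in more detail than a bulk phase function):

```python
import numpy as np

def sample_hg_cos_theta(g, rng, n):
    # Inverse-CDF sampling of the Henyey-Greenstein phase function (g != 0):
    # cos(theta) = (1 + g^2 - ((1 - g^2) / (1 - g + 2 g xi))^2) / (2 g)
    xi = rng.random(n)
    s = (1.0 - g**2) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g**2 - s**2) / (2.0 * g)

rng = np.random.default_rng(0)
g = 0.98     # illustrative: blood is strongly forward-scattering
mu = sample_hg_cos_theta(g, rng, 200_000)
print(mu.mean())   # sample mean approaches the anisotropy factor g
```

By construction the mean cosine of the sampled angles equals g, which is how the anisotropy parameter is defined.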

  10. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
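The idea of tallying dose on a Cartesian grid can be shown with a deliberately stripped-down toy: monoenergetic photons enter a slab, travel exponentially distributed free paths, and deposit all their energy at the first interaction point (no scattering or secondary particles, unlike the fully coupled MCAPM transport described above). All parameter values are illustrative:

```python
import numpy as np

def deposit_dose(n_photons, mu, depth_cm, n_voxels, rng):
    # Photons enter at x = 0 along +x; free path lengths are exponential with
    # attenuation coefficient mu (1/cm). Energy is deposited locally at the
    # first interaction site and tallied on a 1D Cartesian grid.
    x = rng.exponential(scale=1.0 / mu, size=n_photons)
    x = x[x < depth_cm]                      # photons beyond the slab escape
    tally, _ = np.histogram(x, bins=n_voxels, range=(0.0, depth_cm))
    return tally / n_photons                 # fraction of beam energy per voxel

rng = np.random.default_rng(42)
dose = deposit_dose(1_000_000, mu=0.2, depth_cm=20.0, n_voxels=10, rng=rng)
print(dose)   # decreases with depth, following exp(-mu * x)
```

Real codes add scattering, energy-dependent cross sections, and coupled secondary particles, but the voxel tally itself has this shape.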

  11. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

  12. Characterization of the neutron irradiation system for use in the Low-Dose-Rate Irradiation Facility at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franco, Manuel

The objective of this work was to characterize the neutron irradiation system consisting of americium-241 beryllium (241AmBe) neutron sources placed in a polyethylene shielding for use at the Sandia National Laboratories (SNL) Low Dose Rate Irradiation Facility (LDRIF). With a total activity of 0.3 TBq (9 Ci), the source consisted of three recycled 241AmBe sources of different activities that had been combined into a single source. The source in its polyethylene shielding will be used in neutron irradiation testing of components. The characterization of the source-shielding system was necessary to evaluate the radiation environment for future experiments. Characterization of the source was also necessary because the documentation for the three component sources and their relative alignment within the Special Form Capsule (SFC) was inadequate. The system consisting of the source and shielding was modeled using the Monte Carlo N-Particle transport code (MCNP). The model was validated by benchmarking it against measurements using multiple techniques. To characterize the radiation fields over the full spatial geometry of the irradiation system, it was necessary to use a number of instruments of varying sensitivities. First, computed photon radiography assisted in determining the orientation of the component sources. With the capsule properly oriented inside the shielding, the neutron spectra were measured using a variety of techniques. An N-probe Microspec and a neutron Bubble Dosimeter Spectrometer (BDS) set were used to characterize the neutron spectra/field in several locations. In the third technique, neutron foil activation was used to ascertain the neutron spectra. A high-purity germanium (HPGe) detector was used to characterize the photon spectrum. The experimentally measured spectra and the MCNP results compared well.
Once the MCNP model was validated to an adequate level of confidence, parametric analyses were performed on the model to optimize potential experimental configurations and neutron spectra for component irradiation. The final products of this work are a MCNP model validated by measurements, an overall understanding of the neutron irradiation system including photon/neutron transport and effective dose rates throughout the system, and possible experimental configurations for future irradiation of components.

  13. Effect on radioactivity concentration estimation of radon progenies with NaI(Tl) pulse height distribution from considering geometric structure around detector and infiltration of radionuclides.

    PubMed

    Hirouchi, J; Terasaka, Y; Hirao, S; Moriizumi, J; Yamazawa, H

    2015-11-01

    The surface radioactivity concentrations of the radon progenies, (214)Pb and (214)Bi, were estimated from NaI(Tl) pulse height distributions during rain. The improvement in estimation errors caused by considering geometric structures around measuring points and infiltration of radionuclides was discussed. The surface radioactivity concentrations were determined by comparing the count rates at the full-energy peak ranges between observation and calculation with the electron-photon transport code EGS5. It was shown that the concentrations can be underestimated by about 30 % unless the obstacles around the detector or infiltration of radionuclides are considered in gamma ray transfer calculations at measuring points, where there are many tall obstacles, or the ground is covered with unpaved areas. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. High Energy Electron Detectors on Sphinx

    NASA Astrophysics Data System (ADS)

    Thompson, J. R.; Porte, A.; Zucchini, F.; Calamy, H.; Auriel, G.; Coleman, P. L.; Bayol, F.; Lalle, B.; Krishnan, M.; Wilson, K.

    2008-11-01

Z-pinch plasma radiation sources are used to dose test objects with K-shell (~1-4 keV) x-rays. The implosion physics can produce high energy electrons (>50 keV), which could distort interpretation of the soft x-ray effects. We describe the design and implementation of a diagnostic suite to characterize the electron environment of Al wire and Ar gas puff z-pinches on Sphinx. The design used ITS calculations to model detector response to both soft x-rays and electrons and help set upper bounds to the spurious electron flux. Strategies to discriminate between the known soft x-ray emission and the suspected electron flux will be discussed. H. Calamy et al., ``Use of microsecond current prepulse for dramatic improvements of wire array Z-pinch implosion,'' Phys. Plasmas 15, 012701 (2008); J. A. Halbleib et al., ``ITS: the integrated TIGER series of electron/photon transport codes-Version 3.0,'' IEEE Trans. on Nuclear Sci. 39, 1025 (1992).

  15. An investigation of nonuniform dose deposition from an electron beam

    NASA Astrophysics Data System (ADS)

    Lilley, William; Luu, Kieu X.

    1994-08-01

    In a search for an explanation of nonuniform electron-beam dose deposition, the integrated tiger series (ITS) of coupled electron/photon Monte Carlo transport codes was used to calculate energy deposition in the package materials of an application-specific integrated circuit (ASIC) while the thicknesses of some of the materials were varied. The thicknesses of three materials that were in the path of an electron-beam pulse were varied independently so that analysis could determine how the radiation dose measurements using thermoluminescent dosimeters (TLD's) would be affected. The three materials were chosen because they could vary during insertion of the die into the package or during the process of taking dose measurements. The materials were aluminum, HIPEC (a plastic), and silver epoxy. The calculations showed that with very small variations in thickness, the silver epoxy had a large effect on the dose uniformity over the area of the die.

  16. Primary and secondary particle contributions to the depth dose distribution in a phantom shielded from solar flare and Van Allen protons

    NASA Technical Reports Server (NTRS)

    Santoro, R. T.; Claiborne, H. C.; Alsmiller, R. G., Jr.

    1972-01-01

    Calculations have been made using the nucleon-meson transport code NMTC to estimate the absorbed dose and dose equivalent distributions in astronauts inside space vehicles bombarded by solar flare and Van Allen protons. A spherical shell shield of specific radius and thickness with a 30-cm-diam. tissue ball at the geometric center was used to simulate the spacecraft-astronaut configuration. The absorbed dose and the dose equivalent from primary protons, secondary protons, heavy nuclei, charged pions, muons, photons, and positrons and electrons are given as a function of depth in the tissue phantom. Results are given for solar flare protons with a characteristic rigidity of 100 MV and for Van Allen protons in a 240-nautical-mile circular orbit at 30 degree inclination angle incident on both 20-g/sq cm-thick aluminum and polyethylene spherical shell shields.

  17. Measuring and interpreting X-ray fluorescence from planetary surfaces.

    PubMed

    Owens, Alan; Beckhoff, Burkhard; Fraser, George; Kolbe, Michael; Krumrey, Michael; Mantero, Alfonso; Mantler, Michael; Peacock, Anthony; Pia, Maria-Grazia; Pullan, Derek; Schneider, Uwe G; Ulm, Gerhard

    2008-11-15

    As part of a comprehensive study of X-ray emission from planetary surfaces and in particular the planet Mercury, we have measured fluorescent radiation from a number of planetary analog rock samples using monochromatized synchrotron radiation provided by the BESSY II electron storage ring. The experiments were carried out using a purpose built X-ray fluorescence (XRF) spectrometer chamber developed by the Physikalisch-Technische Bundesanstalt, Germany's national metrology institute. The XRF instrumentation is absolutely calibrated and allows for reference-free quantitation of rock sample composition, taking into account secondary photon- and electron-induced enhancement effects. The fluorescence data, in turn, have been used to validate a planetary fluorescence simulation tool based on the GEANT4 transport code. This simulation can be used as a mission analysis tool to predict the time-dependent orbital XRF spectral distributions from planetary surfaces throughout the mapping phase.

  18. Probing the Accretion Geometry of Black Holes with X-Ray Polarization

    NASA Technical Reports Server (NTRS)

    Schnitman, Jeremy D.

    2011-01-01

    In the coming years, new space missions will be able to measure X-ray polarization at levels of 1% or better in the approx.1-10 keV energy band. In particular, X-ray polarization is an ideal tool for determining the nature of black hole (BH) accretion disks surrounded by hot coronae. Using a Monte Carlo radiation transport code in full general relativity, we calculate the spectra and polarization features of these BH systems. At low energies, the signal is dominated by the thermal flux coming directly from the optically thick disk. At higher energies, the thermal seed photons have been inverse-Compton scattered by the corona, often reflecting back off the disk before reaching the observer, giving a distinctive polarization signature. By measuring the degree and angle of this X-ray polarization, we can infer the BH inclination, the emission geometry of the accretion flow, and also determine the spin of the black hole.

  19. Contrast cancellation technique applied to digital x-ray imaging using silicon strip detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, C.; Lopez, J.; Sanabria, J. C.

    2005-12-15

Dual-energy mammographic imaging experimental tests have been performed using a compact dichromatic imaging system based on a conventional x-ray tube, a mosaic crystal, and a 384-strip silicon detector equipped with full-custom electronics with single photon counting capability. For simulating mammal tissue, a three-component phantom, made of Plexiglass, polyethylene, and water, has been used. Images have been collected with three different pairs of x-ray energies: 16-32 keV, 18-36 keV, and 20-40 keV. A Monte Carlo simulation of the experiment has also been carried out using the MCNP-4C transport code. The Alvarez-Macovski algorithm has been applied both to experimental and simulated data to remove the contrast between two of the phantom materials so as to enhance the visibility of the third one.

  20. Large-scale two-photon imaging revealed super-sparse population codes in the V1 superficial layer of awake monkeys.

    PubMed

    Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing

    2018-04-26

    One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.

  1. Photon Throughput Calculations for a Spherical Crystal Spectrometer

    NASA Astrophysics Data System (ADS)

    Gilman, C. J.; Bitter, M.; Delgado-Aparicio, L.; Efthimion, P. C.; Hill, K.; Kraus, B.; Gao, L.; Pablant, N.

    2017-10-01

    X-ray imaging crystal spectrometers of the type described in Refs. have become a standard diagnostic for Doppler measurements of profiles of the ion temperature and the plasma flow velocities in magnetically confined, hot fusion plasmas. These instruments have by now been implemented on major tokamak and stellarator experiments in Korea, China, Japan, and Germany and are currently also being designed by PPPL for ITER. A still missing part in the present data analysis is an efficient code for photon throughput calculations to evaluate the chord-integrated spectral data. The existing ray tracing codes cannot be used for a data analysis between shots, since they require extensive and time consuming numerical calculations. Here, we present a detailed analysis of the geometrical properties of the ray pattern. This method allows us to minimize the extent of numerical calculations and to create a more efficient code. This work was performed under the auspices of the U.S. Department of Energy by Princeton Plasma Physics Laboratory under contract DE-AC02-09CH11466.
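The geometry of any crystal-spectrometer throughput analysis starts from the Bragg condition n·λ = 2d·sin θ, which fixes the reflection angle for each wavelength before any ray pattern is traced. A minimal helper (the numerical values are illustrative, not instrument parameters from the paper):

```python
import math

def bragg_angle_deg(wavelength_A, two_d_A, order=1):
    # Bragg condition: n * lambda = 2 d * sin(theta).
    s = order * wavelength_A / two_d_A
    if not 0.0 < s <= 1.0:
        raise ValueError("no Bragg reflection for these parameters")
    return math.degrees(math.asin(s))

# Illustrative numbers only: a ~4 A line on a crystal with 2d ~ 6.7 A.
print(f"{bragg_angle_deg(4.0, 6.7):.1f} deg")
```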

  2. BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) - Generation Methodology and Preliminary Testing of two ENEA-Bologna Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Sinitsa, Valentin; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    Two broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format, dedicated to LWR shielding and pressure vessel dosimetry applications, were generated following the methodology recommended by the US ANSI/ANS-6.1.2-1999 (R2009) standard. These libraries, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, are respectively based on JEFF-3.1.1 and ENDF/B-VII.0 nuclear data and adopt the same broad-group energy structure (47 n + 20 γ) of the ORNL BUGLE-96 similar library. They were respectively obtained from the ENEA-Bologna VITJEFF311.BOLIB and VITENDF70.BOLIB libraries in AMPX format for nuclear fission applications through problem-dependent cross section collapsing with the ENEA-Bologna 2007 revision of the ORNL SCAMPI nuclear data processing system. Both previous libraries are based on the Bondarenko self-shielding factor method and have the same AMPX format and fine-group energy structure (199 n + 42 γ) as the ORNL VITAMIN-B6 similar library from which BUGLE-96 was obtained at ORNL. A synthesis of a preliminary validation of the cited BUGLE-type libraries, performed through 3D fixed source transport calculations with the ORNL TORT-3.2 SN code, is included. The calculations were dedicated to the PCA-Replica 12/13 and VENUS-3 engineering neutron shielding benchmark experiments, specifically conceived to test the accuracy of nuclear data and transport codes in LWR shielding and radiation damage analyses.

  3. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-02-01

Advances of computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load balancing schemes for homogeneous and heterogeneous networks.
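The partitioning pattern described above, independent batches of histories with separate random streams merged into one tally, can be sketched in a few lines. Threads are used here only to keep the example self-contained; actual codes such as ITS and MCNP distribute batches over processes or machines (PVM in this work), which is what yields real wall-clock speed-up:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_batch(seed, n_histories, mu=1.0):
    # One independent batch: sample exponential photon path lengths and
    # return partial tallies. No interprocessor communication is needed.
    rng = np.random.default_rng(seed)
    paths = rng.exponential(scale=1.0 / mu, size=n_histories)
    return paths.sum(), n_histories

# 400k histories split over 4 workers with distinct seeds, tallies merged.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda seed: run_batch(seed, 100_000), range(4)))

total, n = (sum(v) for v in zip(*results))
mfp_estimate = total / n
print(mfp_estimate)   # estimates the mean free path 1/mu = 1.0
```

Because the batches are statistically independent, the combined estimate's uncertainty shrinks as 1/sqrt(N) regardless of how the histories are distributed.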

  4. Multi-threading performance of Geant4, MCNP6, and PHITS Monte Carlo codes for tetrahedral-mesh geometry.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya

    2018-05-04

In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three different Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes, the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited much longer initialization time than the other codes, especially for the complex phantom (MRCP). The improvement of computation speed due to the use of a multi-threaded code was calculated as the speed-up factor, the ratio of the computation speed on a multi-threaded code to the computation speed on a single-threaded code. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor almost linearly increasing with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, compared to that of the other codes, the memory usage of Geant4 more rapidly increased with the number of threads, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). It is notable that compared to that of the other codes, the memory usage of PHITS was much lower, regardless of both the complexity of the phantom and the number of threads, hardly increasing with the number of threads for the MRCP.
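The speed-up factor used above is simply the ratio of single-threaded to N-threaded wall-clock time. The timings below are hypothetical, chosen only to mimic the reported ordering of the three codes, not measured values from the study:

```python
def speed_up(t_single, t_multi):
    # Ratio of single-threaded to multi-threaded wall-clock time.
    return t_single / t_multi

# Hypothetical wall times (s) for the same tally at N = 1 and N = 40.
timings = {"Geant4": (1200.0, 40.0), "MCNP6": (1200.0, 150.0), "PHITS": (1200.0, 400.0)}
for code, (t1, t40) in timings.items():
    s = speed_up(t1, t40)
    print(f"{code}: speed-up ~{s:.0f}, parallel efficiency {s / 40:.0%} at N = 40")
```

Dividing the speed-up by N gives the parallel efficiency, which makes the sub-linear scaling of the latter two codes explicit.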

  5. Spin-dependent heat and thermoelectric currents in a Rashba ring coupled to a photon cavity

    NASA Astrophysics Data System (ADS)

    Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

    2018-01-01

    Spin-dependent heat and thermoelectric currents in a quantum ring with Rashba spin-orbit interaction placed in a photon cavity are theoretically calculated. The quantum ring is coupled to two external leads with different temperatures. In a resonant regime, with the ring structure in resonance with the photon field, the heat and the thermoelectric currents can be controlled by the Rashba spin-orbit interaction. The heat current is suppressed in the presence of the photon field due to contribution of the two-electron and photon replica states to the transport while the thermoelectric current is not sensitive to changes in parameters of the photon field. Our study opens a possibility to use the proposed interferometric device as a tunable heat current generator in the cavity photon field.

  6. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
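The "approximate analytical closure" that the Monte Carlo scheme above is designed to replace is commonly the Levermore M1 closure, which sets the Eddington factor χ from the reduced flux f = |F|/(cE). A sketch of that standard formula (whether SPEC uses exactly this form is not stated in the abstract):

```python
import math

def eddington_factor_m1(f):
    # Levermore M1 closure: chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)),
    # where f = |F| / (c E) is the reduced flux, 0 <= f <= 1. It interpolates
    # between the diffusion (chi = 1/3) and free-streaming (chi = 1) limits.
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))

print(eddington_factor_m1(0.0))   # 1/3: optically thick (diffusion) limit
print(eddington_factor_m1(1.0))   # 1.0: free-streaming limit
```

A Monte Carlo closure supplies this missing second-moment information from sampled radiation packets instead of an analytic interpolation.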

  7. User's manual for a material transport code on the Octopus Computer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.; Mendez, G.D.

    1978-09-15

    A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.

  8. Squeezing as a route to photonic analogues of topological superconductors

    NASA Astrophysics Data System (ADS)

    Houde, Martin; Peano, Vittorio; Brendel, Christian; Marquardt, Florian; Clerk, Aashish

    There has been considerable recent interest in studying topological phases of photonic systems. In many cases the resulting system is described by a quadratic particle-conserving Hamiltonian which is directly equivalent to its fermionic counterpart. Here, we consider a class of photonic topological phases where this correspondence fails: photonic systems where particle-number non-conserving terms break time-reversal symmetry. We show that these phases support protected edge modes which facilitate chiral inelastic and elastic transport channels. We also discuss the possibility of quantum amplification using these edge states. Our system could be realized in a variety of systems, including nonlinear photonic crystals, superconducting circuits and optomechanical systems.

  9. Accelerator shield design of KIPT neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Gohar, Y.

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW at an electron energy of 100 MeV. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/hr during operation. The main source of the biological dose is the photons and neutrons generated by interactions of electrons leaking from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design because of its capability to handle coupled electron-photon-neutron transport problems. The direct photon dose can be tallied by an MCNPX calculation starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons: the neutron yield from interactions with the surrounding components is less than 0.01 neutron per electron, which complicates Monte Carlo analyses and consumes tremendous computation time to tally the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for this study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation that obtains the neutron and secondary-photon doses. The weight-windows variance reduction technique is utilized for both neutron and photon dose calculations. Two shielding materials, heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total dose outside the shield boundary at less than 0.5 mrem/hr. The shield configuration and parameters of the accelerator building have been determined and are presented in this paper. (authors)

  10. TH-CD-201-02: A Monte Carlo Investigation of a Novel Detector Arrangement for the Energy Spectrum Measurement of a 6MV Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taneja, S; Bartol, L; Culberson, W

    2016-06-15

    Purpose: Direct measurement of the energy spectrum of a 6 MV linear accelerator has not been successful due to the high fluence rate and high-energy nature of these photon beams. Previous work used a Compton scattering (CS) spectrometry setup with a shielded spectrometer for spectrum measurements; despite substantial lead shielding, excessive pulse pile-up was seen. The MCNP6 transport code was used to investigate the feasibility and effectiveness of performing measurements using a novel detector setup. Methods: Simulations were performed with a shielded high-purity germanium (HPGe) semiconductor detector placed in the accelerator vault's maze, with a 2 cm diameter collimator through a 92 cm thick concrete wall. The detector was positioned 660 cm from a scattering rod (placed at isocenter) at an angle of 45° relative to the central axis. This setup was compared with the shielded detector positioned in the room, 200 cm from the scattering rod at the same CS angle. Simulations were used to determine fluence contributions from three sources: (1) CS photons traveling through the collimator aperture, the intended signal; (2) CS scatter photons penetrating the detector shield; and (3) room-scattered photons penetrating the detector shield. Variance reduction techniques including weight windows, DXTRAN spheres, forced collisions, and energy cutoffs were used. Results: Simulations showed that the number of pulses per starting particle from an F8 detector tally for the intended signal decreased by a factor of 10^2 when moving the detector out of the vault. This reduction was amplified for the unwanted scatter signal, which decreased by up to a factor of 10^9. Conclusion: This work used MCNP6 to show that using a vault wall to shield unwanted scatter and increasing the isocenter-to-detector distance reduces unwanted fluence at the detector. This study aimed to provide motivation for future experimental work using the proposed setup.

  11. A Model for Atomic and Molecular Interstellar Gas: The Meudon PDR Code

    NASA Astrophysics Data System (ADS)

    Le Petit, Franck; Nehmé, Cyrine; Le Bourlot, Jacques; Roueff, Evelyne

    2006-06-01

    We present the revised ``Meudon'' photon-dominated region (PDR) code, available on the Web under the GNU Public License. The general organization of the code is described down to a level that should allow most observers to use it as an interpretation tool with minimal help on our part. Two grids of models, one for low-excitation diffuse clouds and one for dense, highly illuminated clouds, are discussed, and some new results on PDR modeling are highlighted.

  12. Comparison of IPSM 1990 photon dosimetry code of practice with IAEA TRS‐398 and AAPM TG‐51.

    PubMed Central

    Henríquez, Francisco Cutanda

    2009-01-01

    Several codes of practice for photon dosimetry are currently used around the world, supported by different organizations. A comparison of IPSM 1990 with both IAEA TRS‐398 and AAPM TG‐51 has been performed. All three protocols are based on the calibration of ionization chambers in terms of standards of absorbed dose to water, as is the case with other modern codes of practice. This comparison was carried out for photon beams of nominal energies 4 MV, 6 MV, 8 MV, 10 MV and 18 MV. An NE 2571 graphite ionization chamber was used in this study, cross‐calibrated against an NE 2611A secondary standard calibrated at the National Physical Laboratory (NPL). Absolute dose in reference conditions was obtained using each of these three protocols, including beam quality indices, beam quality conversion factors (both theoretical and NPL experimental values), correction factors for influence quantities, and absolute dose measurements. Each protocol's recommendations were strictly followed. Uncertainties were obtained according to the ISO Guide to the Expression of Uncertainty in Measurement. Absorbed doses obtained according to all three protocols agree within experimental uncertainty. The largest difference between absolute dose results for any two protocols occurs at the highest energy: 0.7% between IPSM 1990 and IAEA TRS‐398 using theoretical beam quality conversion factors. PACS number: 87.55.tm

  13. Comparison of heavy-ion transport simulations: Collision integral in a box

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen

    2018-03-01

    Simulations by transport codes are indispensable for extracting valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundaries, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we consider only the two-body collision term; i.e., we perform cascade calculations. When Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or with the completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in a reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigation in a similar vein of other ingredients in transport calculations, such as mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
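    The Pauli-blocking comparison above reduces, for each attempted collision, to evaluating the Uehling-Uhlenbeck factor (1 - f) against a Fermi-Dirac reference distribution. A minimal sketch (the 37 MeV Fermi energy for symmetric matter at saturation density is an illustrative assumption, not a value quoted in the paper):

```python
import math

def fermi_dirac(eps, mu, T):
    """Occupation probability f(eps) for single-particle energy eps,
    chemical potential mu, and temperature T (all in MeV); T = 0 gives
    a sharp Fermi step."""
    if T == 0.0:
        return 1.0 if eps <= mu else 0.0
    return 1.0 / (1.0 + math.exp((eps - mu) / T))

def pauli_blocking_factor(eps, mu, T):
    """Uehling-Uhlenbeck factor (1 - f): the probability that scattering
    into a final state of energy eps is NOT Pauli-blocked."""
    return 1.0 - fermi_dirac(eps, mu, T)

# At T = 0, every final state below the Fermi energy is fully blocked
# (illustrative mu ~ 37 MeV for cold symmetric matter at saturation).
assert pauli_blocking_factor(30.0, 37.0, 0.0) == 0.0
```

The deviation discussed in the abstract is precisely between this smooth reference (1 - f) and the fluctuating phase-space occupations sampled by the individual codes.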

  14. Photon diffusion coefficient in scattering and absorbing media.

    PubMed

    Pierrat, Romain; Greffet, Jean-Jacques; Carminati, Rémi

    2006-05-01

    We present a unified derivation of the photon diffusion coefficient for both steady-state and time-dependent transport in disordered absorbing media. The derivation is based on a modal analysis of the time-dependent radiative transfer equation. This approach confirms that the dynamic diffusion coefficient is given by the random-walk result D = cl*/3, where l* is the transport mean free path and c is the energy velocity, independent of the level of absorption. It also shows that the diffusion coefficient for steady-state transport, often used in biomedical optics, depends on absorption, in agreement with recent theoretical and experimental works. These two results resolve a recurrent controversy in light propagation and imaging in scattering media.
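    The random-walk result D = cl*/3 quoted above is straightforward to evaluate; a short sketch (the numerical values of c and l* below are assumed for illustration, not taken from the paper):

```python
def diffusion_coefficient(c, l_star):
    """Dynamic photon diffusion coefficient D = c * l* / 3, the
    random-walk result that, per the modal analysis above, does not
    depend on the level of absorption."""
    return c * l_star / 3.0

# Illustrative biomedical-optics numbers: energy velocity in tissue
# c ~ 2.2e8 m/s (refractive index ~ 1.36) and l* ~ 1 mm.
D = diffusion_coefficient(2.2e8, 1.0e-3)  # units: m^2/s
```

The steady-state coefficient used in biomedical optics would, by contrast, carry an absorption-dependent correction to this value.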

  15. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  16. The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, T.E.; Mihalczo, J.T.

    1995-12-31

    This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.

  17. High-order dispersion effects in two-photon interference

    NASA Astrophysics Data System (ADS)

    Mazzotta, Zeudi; Cialdi, Simone; Cipriani, Daniele; Olivares, Stefano; Paris, Matteo G. A.

    2016-12-01

    Two-photon interference and the Hong-Ou-Mandel (HOM) effect are relevant tools for quantum metrology and quantum information processing. In optical coherence tomography, the HOM effect is exploited to achieve high-resolution measurements, with the width of the HOM dip being the main parameter. On the other hand, applications like dense coding require high-visibility performance. Here we address high-order dispersion effects in two-photon interference and study, theoretically and experimentally, the dependence of the visibility and the width of the HOM dip on both the pump spectrum and the downconverted photon spectrum. In particular, a spatial light modulator is exploited to experimentally introduce and manipulate a custom phase function simulating high-order dispersion effects. Overall, we show that it is possible to effectively introduce high-order dispersion effects on the propagation of photons and also to compensate for such effects. Our results clarify the role of the different dispersion phenomena and pave the way for optimization procedures in quantum technological applications involving PDC photons and optical fibers.

  18. Nanoporous hard data: optical encoding of information within nanoporous anodic alumina photonic crystals.

    PubMed

    Santos, Abel; Law, Cheryl Suwen; Pereira, Taj; Losic, Dusan

    2016-04-21

    Herein, we present a method for storing binary data within the spectral signature of nanoporous anodic alumina photonic crystals. A rationally designed multi-sinusoidal anodisation approach makes it possible to engineer the photonic stop band of nanoporous anodic alumina with precision. As a result, the transmission spectrum of these photonic nanostructures can be engineered to feature well-resolved and selectively positioned characteristic peaks across the UV-visible spectrum. Using this property, we implement an 8-bit binary code and assess the versatility and capability of this system by a series of experiments aiming to encode different information within the nanoporous anodic alumina photonic crystals. The obtained results reveal that the proposed nanosized platform is robust, chemically stable, versatile and has a set of unique properties for data storage, opening new opportunities for developing advanced nanophotonic tools for a wide range of applications, including sensing, photonic tagging, self-reporting drug releasing systems and secure encoding of information.
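    The 8-bit encoding described above can be pictured as mapping each bit to the presence or absence of a transmission peak at a pre-assigned wavelength. A hypothetical sketch (the slot wavelengths, bit ordering and matching tolerance are invented for illustration and do not come from the paper):

```python
def encode_byte(value, slots):
    """Map an 8-bit value onto presence/absence of transmission peaks
    at eight pre-assigned wavelength slots (nm), MSB first; returns the
    list of peak positions that should appear in the spectrum."""
    bits = [(value >> i) & 1 for i in range(7, -1, -1)]
    return [wl for bit, wl in zip(bits, slots) if bit]

def decode_byte(peaks, slots, tolerance=2.0):
    """Recover the byte by checking each slot for a measured peak
    within the given wavelength tolerance (nm)."""
    value = 0
    for i, wl in enumerate(slots):
        if any(abs(p - wl) <= tolerance for p in peaks):
            value |= 1 << (7 - i)
    return value

# Hypothetical, well-separated peak positions across the visible range.
slots = [420, 450, 480, 510, 540, 570, 600, 630]
assert decode_byte(encode_byte(0b10110001, slots), slots) == 0b10110001
```

In the actual device, the slot positions are set by the multi-sinusoidal anodisation profile that engineers the photonic stop band.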

  19. Tunable photonic band gaps and optical nonreciprocity by an RF-driving ladder-type system in moving optical lattice

    NASA Astrophysics Data System (ADS)

    Ba, Nuo; Zhong, Xin; Wang, Lei; Fei, Jin-You; Zhang, Yan; Bao, Qian-Qian; Xiao, Li

    2018-03-01

    We investigate the photonic transport properties of a 1D moving optical lattice filled with cold atoms driven into a four-level ladder-type configuration, and obtain dynamically controlled photonic bandgaps and optical nonreciprocity. We find that two distinct optical nonreciprocities can be generated at two well-developed photonic bandgaps based on double dark states in the presence of a radio-frequency field. When the radio-frequency field is absent, however, only a single induced photonic bandgap with distinguishable optical nonreciprocity can be opened up via one dark state. Dynamic control of the induced photonic bandgaps and optical nonreciprocity could be exploited to achieve all-optical diodes and routing for quantum information networks.

  20. 49 CFR Appendix C to Part 229 - FRA Locomotive Standards-Code of Defects

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false FRA Locomotive Standards-Code of Defects C Appendix C to Part 229 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Pt. 229, App. C...

  1. 49 CFR Appendix C to Part 229 - FRA Locomotive Standards-Code of Defects

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false FRA Locomotive Standards-Code of Defects C Appendix C to Part 229 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Pt. 229, App. C...

  2. 49 CFR Appendix C to Part 229 - FRA Locomotive Standards-Code of Defects

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false FRA Locomotive Standards-Code of Defects C Appendix C to Part 229 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD LOCOMOTIVE SAFETY STANDARDS Pt. 229, App. C...

  3. The photon beam transport and diagnostics system at FERMI@Elettra, the Italian seeded FEL source: commissioning experience and most recent results

    NASA Astrophysics Data System (ADS)

    Zangrando, Marco; Abrami, Alessandro; Cocco, Daniele; Fava, Claudio; Gerusina, Simone; Gobessi, Riccardo; Mahne, Nicola; Mazzucco, Eric; Raimondi, Lorenzo; Rumiz, Luca; Svetina, Cristian; Parmigiani, Fulvio

    2012-10-01

    FERMI@Elettra, the Italian Free Electron Laser (FEL) source, is in an advanced commissioning phase, having already delivered radiation down to the endstations. The facility routinely uses the low-energy branch (FEL1) to produce photons in the 65-20 nm range, while the 20-4 nm range will be covered by FEL2, which is now being commissioned. A dedicated system (PADReS) collects, diagnoses, transports and focuses the radiation, providing information about the photon beam intensity, position, spectral content, transverse coherence, and so on. The experience gathered so far, as well as the most recent results from both the diagnostic section and the beam manipulation part, are presented here.

  4. Controlling resonant photonic transport along optical waveguides by two-level atoms

    NASA Astrophysics Data System (ADS)

    Yan, Cong-Hua; Wei, Lian-Fu; Jia, Wen-Zhi; Shen, Jung-Tsung

    2011-10-01

    Recent works [Shen et al., Phys. Rev. Lett. 95, 213001 (2005); Zhou et al., Phys. Rev. Lett. 101, 100501 (2008)] showed that the incident photons cannot transmit along an optical waveguide containing a resonant two-level atom (TLA). Here we propose an approach to overcome such a difficulty by using asymmetric couplings between the photons and a TLA. Our numerical results show that the transmission spectrum of the photon depends on both the frequency of the incident photons and the photon-TLA couplings. Consequently, this system can serve as a controllable photon attenuator, by which the transmission probability of the resonantly incident photons can be changed from 0% to 100%. A possible application to explain the recent experimental observations [Astafiev et al., Science 327, 840 (2010)] is also discussed.

  5. Determining the mass attenuation coefficient, effective atomic number, and electron density of raw wood and binderless particleboards of Rhizophora spp. by using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh

    Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the MCNP5 process can be followed to determine the attenuation of gamma rays with several photon energies in other materials.
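    The XCOM mixture rule mentioned above combines elemental mass attenuation coefficients weighted by elemental mass fractions. A minimal sketch (the coefficient values in the example are approximate illustrative inputs, not data from the paper):

```python
def mass_attenuation_mixture(weight_fractions, mu_rho):
    """XCOM mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i, where
    w_i are elemental weight fractions (summing to 1) and (mu/rho)_i
    are elemental mass attenuation coefficients in cm^2/g."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-6
    return sum(w * mu_rho[el] for el, w in weight_fractions.items())

# Water as an example (approximate elemental values near 60 keV,
# quoted for illustration only).
mu_water = mass_attenuation_mixture(
    {"H": 0.1119, "O": 0.8881},
    {"H": 0.326, "O": 0.191},
)  # cm^2/g
```

Comparing MCNP5-derived coefficients for the Rhizophora spp. samples against this rule is what validates the simulated attenuation parameters.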

  6. 49 CFR Appendix C to Part 215 - FRA Freight Car Standards Defect Code

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false FRA Freight Car Standards Defect Code C Appendix C... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD FREIGHT CAR SAFETY STANDARDS Pt. 215, App. C Appendix C to Part 215—FRA Freight Car Standards Defect Code The following defect code has been established for use...

  7. 49 CFR Appendix C to Part 215 - FRA Freight Car Standards Defect Code

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false FRA Freight Car Standards Defect Code C Appendix C... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD FREIGHT CAR SAFETY STANDARDS Pt. 215, App. C Appendix C to Part 215—FRA Freight Car Standards Defect Code The following defect code has been established for use...

  8. 49 CFR Appendix C to Part 215 - FRA Freight Car Standards Defect Code

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false FRA Freight Car Standards Defect Code C Appendix C... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD FREIGHT CAR SAFETY STANDARDS Pt. 215, App. C Appendix C to Part 215—FRA Freight Car Standards Defect Code The following defect code has been established for use...

  9. 49 CFR Appendix C to Part 215 - FRA Freight Car Standards Defect Code

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false FRA Freight Car Standards Defect Code C Appendix C... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD FREIGHT CAR SAFETY STANDARDS Pt. 215, App. C Appendix C to Part 215—FRA Freight Car Standards Defect Code The following defect code has been established for use...

  10. Methods of treating complex space vehicle geometry for charged particle radiation transport

    NASA Technical Reports Server (NTRS)

    Hill, C. W.

    1973-01-01

    Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined, and evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.

  11. Comparisons of anomalous and collisional radial transport with a continuum kinetic edge code

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S.; Cohen, R.; Rognlien, T.

    2009-05-01

    Modeling of anomalous (turbulence-driven) radial transport in controlled-fusion plasmas is necessary for long-time transport simulations. Here the focus is continuum kinetic edge codes such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory, but the model also has wider application. Our previously developed anomalous diagonal transport matrix model with velocity-dependent convection and diffusion coefficients allows contact with typical fluid transport models (e.g., UEDGE). Results are presented that combine the anomalous transport model and collisional transport owing to ion drift orbits utilizing a Krook collision operator that conserves density and energy. Comparison is made of the relative magnitudes and possible synergistic effects of the two processes for typical tokamak device parameters.

  12. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  13. Physics of cosmological cascades and observable properties

    NASA Astrophysics Data System (ADS)

    Fitoussi, T.; Belmont, R.; Malzac, J.; Marcowith, A.; Cohen-Tanugi, J.; Jean, P.

    2017-04-01

    TeV photons from extragalactic sources are absorbed in the intergalactic medium and initiate electromagnetic cascades. These cascades offer a unique tool to probe the properties of the universe at cosmological scales. We present a new Monte Carlo code dedicated to the physics of such cascades. This code has been tested against both published results and analytical approximations, and is made publicly available. Using this numerical tool, we investigate the main cascade properties (spectrum, halo extension and time delays), and study in detail their dependence on the physical parameters (extragalactic magnetic field, extragalactic background light, source redshift, source spectrum and beaming emission). The limitations of analytical solutions are emphasized. In particular, analytical approximations account only for the first generation of photons and higher branches of the cascade tree are neglected.

  14. A strongly interacting polaritonic quantum dot

    NASA Astrophysics Data System (ADS)

    Jia, Ningyuan; Schine, Nathan; Georgakopoulos, Alexandros; Ryou, Albert; Clark, Logan W.; Sommer, Ariel; Simon, Jonathan

    2018-06-01

    Polaritons are promising constituents of both synthetic quantum matter [1] and quantum information processors [2], whose properties emerge from their components: from light, polaritons draw fast dynamics and ease of transport; from matter, they inherit the ability to collide with one another. Cavity polaritons are particularly promising as they may be confined and subjected to synthetic magnetic fields controlled by cavity geometry [3], and furthermore they benefit from increased robustness due to the cavity enhancement in light-matter coupling. Nonetheless, until now, cavity polaritons have operated only in a weakly interacting mean-field regime [4,5]. Here we demonstrate strong interactions between individual cavity polaritons enabled by employing highly excited Rydberg atoms as the matter component of the polaritons. We assemble a quantum dot composed of approximately 150 strongly interacting Rydberg-dressed 87Rb atoms in a cavity, and observe blockaded transport of photons through it. We further observe coherent photon tunnelling oscillations, demonstrating that the dot is zero-dimensional. This work establishes the cavity Rydberg polariton as a candidate qubit in a photonic information processor and, by employing multiple resonator modes as the spatial degrees of freedom of a photonic particle, the primary ingredient to form photonic quantum matter [6].

  15. The FERMI@Elettra FEL Photon Transport System

    NASA Astrophysics Data System (ADS)

    Zangrando, M.; Cudin, I.; Fava, C.; Godnig, R.; Kiskinova, M.; Masciovecchio, C.; Parmigiani, F.; Rumiz, L.; Svetina, C.; Turchet, A.; Cocco, D.

    2010-06-01

    The FERMI@Elettra free electron laser (FEL) user facility is under construction at Sincrotrone Trieste (Italy), and it will be operative in late 2010. It is based on a seeded scheme providing an almost perfect transform-limited and fully spatially coherent photon beam. FERMI@Elettra will cover the wavelength range 100 to 3 nm with the fundamental harmonics, and down to 1 nm with higher harmonics. We present the layout of the photon beam transport system that includes: the first common part providing on-line and shot-to-shot beam diagnostics, called PADReS (Photon Analysis Delivery and Reduction System), and 3 independent beamlines feeding the experimental stations. Particular emphasis is given to the solutions adopted to preserve the wavefront, and to avoid damage on the different optical elements. Peculiar FEL devices, not common in the Synchrotron Radiation facilities, are described in more detail, e.g. the online photon energy spectrometer measuring shot-by-shot the spectrum of the emitted radiation, the beam splitting and delay line system dedicated to cross/auto correlation and pump-probe experiments, and the wavefront preserving active optics adapting the shape and size of the focused spot to meet the needs of the different experiments.

  16. Two-photon excited fluorescence emission from hemoglobin

    NASA Astrophysics Data System (ADS)

    Sun, Qiqi; Zeng, Yan; Zhang, Wei; Zheng, Wei; Luo, Yi; Qu, Jianan Y.

    2015-03-01

    Hemoglobin, one of the most important proteins in blood, is responsible for oxygen transport in almost all vertebrates. Recently, we discovered two-photon excited hemoglobin fluorescence and achieved label-free microvascular imaging based on this fluorescence. However, the mechanism of its fluorescence emission still remains unknown. In this work, we studied the two-photon excited fluorescence properties of the hemoglobin subunits, heme/hemin (iron (II)/(III) protoporphyrin IX) and globin. We first studied the properties of heme; the similar spectral and temporal characteristics of heme and hemoglobin fluorescence provide strong evidence that heme is the fluorophore in hemoglobin. We then studied the fluorescence properties of hemin, globin and methemoglobin, finding that hemin may be the main contributor to methemoglobin fluorescence and that globin shows tryptophan fluorescence like other proteins. Finally, since heme is a centrosymmetric molecule, the fact that the Soret-band fluorescence of heme and hemoglobin was not observed under single-photon excitation in the previous study may be due to the parity selection rule. The discovery of two-photon excited heme fluorescence may open a new window for heme biology research, since heme as a cofactor of hemoproteins has many functions, including chemical catalysis, electron transfer and transport of diatomic gases.

  17. SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chibani, O; Price, R; Ma, C

    Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using protoporphyrin IX (PpIX) for tumor targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18-MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV and 45 MeV). These results contradict the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied for electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose-rate effects on PpIX response.
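The simple model mentioned in the abstract can be reproduced with a few lines of arithmetic. The sketch below evaluates the Frank-Tamm photon yield in the 375–425 nm band and compares the resulting Cerenkov energy per unit path to the stated 2 MeV/cm stopping power; the refractive index, β = 1, and the 400 nm reference wavelength are assumed round numbers for illustration, not values taken from the GEPTS simulation:

```python
import math

ALPHA = 1 / 137.036          # fine-structure constant
N_WATER = 1.33               # refractive index of water (assumed for tissue)
BETA = 1.0                   # v/c for a relativistic (>~1 MeV) electron

def cherenkov_photons_per_cm(lam1_nm, lam2_nm, n=N_WATER, beta=BETA):
    """Frank-Tamm photon yield per cm of path in the band [lam1, lam2]."""
    lam1 = lam1_nm * 1e-7    # nm -> cm
    lam2 = lam2_nm * 1e-7
    return 2 * math.pi * ALPHA * (1 - 1 / (beta * n) ** 2) * (1 / lam1 - 1 / lam2)

photons = cherenkov_photons_per_cm(375, 425)      # roughly 60 photons/cm
e_photon_ev = 1239.8 / 400                        # ~3.1 eV per 400 nm photon
cherenkov_ev_per_cm = photons * e_photon_ev       # ~200 eV/cm
ionization_ev_per_cm = 2e6                        # 2 MeV/cm stopping power (abstract)
ratio = ionization_ev_per_cm / cherenkov_ev_per_cm
```

The resulting ratio is of order 10^4, i.e. the same order of magnitude as the simulated 4000–5500, which is as close as such a crude band-averaged estimate can be expected to get.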

  18. Light transport feature for SCINFUL.

    PubMed

    Etaati, G R; Ghal-Eh, N

    2008-03-01

    An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.

  19. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parameterized geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy using an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
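The core navigation primitive such a quadric-surface module needs is "distance along the current direction to the next bounding surface". A minimal sketch of that primitive, with the surface written in the general form f(x) = xᵀAx + b·x + c = 0 (the function name and representation are our own, not from the paper's code):

```python
import numpy as np

def ray_quadric_distance(p, d, A, b, c):
    """Smallest positive t with f(p + t*d) = 0, where the bounding surface is
    f(x) = x.A.x + b.x + c. Returns np.inf when the ray never hits it."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    qa = d @ A @ d                     # quadratic coefficient in t
    qb = 2 * p @ A @ d + b @ d         # linear coefficient
    qc = p @ A @ p + b @ p + c         # constant term
    if abs(qa) < 1e-12:                # degenerate: surface is a plane
        return -qc / qb if qb != 0 and -qc / qb > 0 else np.inf
    disc = qb * qb - 4 * qa * qc
    if disc < 0:
        return np.inf
    roots = [(-qb - np.sqrt(disc)) / (2 * qa), (-qb + np.sqrt(disc)) / (2 * qa)]
    pos = [t for t in roots if t > 1e-9]
    return min(pos) if pos else np.inf

# Unit sphere x.x - 1 = 0: a particle at the origin moving along +x exits at t = 1.
A = np.eye(3); b = np.zeros(3); c = -1.0
t = ray_quadric_distance([0, 0, 0], [1, 0, 0], A, b, c)
```

A production tracker would evaluate this for every bounding surface of the current region and step to the minimum, which is exactly the per-step geometry cost the paper's auxiliary index array is designed to reduce.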

  20. First ERO2.0 modeling of Be erosion and non-local transport in JET ITER-like wall

    NASA Astrophysics Data System (ADS)

    Romazanov, J.; Borodin, D.; Kirschner, A.; Brezinsek, S.; Silburn, S.; Huber, A.; Huber, V.; Bufferand, H.; Firdaouss, M.; Brömmel, D.; Steinbusch, B.; Gibbon, P.; Lasa, A.; Borodkina, I.; Eksaeva, A.; Linsmeier, Ch.; JET Contributors

    2017-12-01

    ERO is a Monte Carlo code for modeling plasma-wall interaction and 3D plasma impurity transport for applications in fusion research. The code has undergone a significant upgrade (ERO2.0) that increases the simulation volume to cover the entire plasma edge of a fusion device, allowing a more self-consistent treatment of impurity transport and comparison with a larger number and variety of experimental diagnostics. In this contribution, the physics-relevant technical innovations of the new code version are described and discussed. The new capabilities of the code are demonstrated by modeling beryllium (Be) erosion of the main wall during JET limiter discharges. Results for erosion patterns along the limiter surfaces and for global Be transport, including incident particle distributions, are presented. A novel synthetic diagnostic, which mimics experimental wide-angle 2D camera images, is presented and used for validating various aspects of the code, including erosion, magnetic shadowing, non-local impurity transport, and light emission simulation.

  1. Study of the impact of artificial articulations on the dose distribution under medical irradiation

    NASA Astrophysics Data System (ADS)

    Buffard, E.; Gschwind, R.; Makovicka, L.; Martin, E.; Meunier, C.; David, C.

    2005-02-01

    Perturbations due to the presence of high-density heterogeneities in the body are not correctly taken into account by the treatment planning systems currently available for external radiotherapy. For this reason, the accuracy of dose distribution calculations has to be improved by using Monte Carlo simulations. In a previous study, we established a theoretical model using the Monte Carlo code EGSnrc [I. Kawrakow, D.W.O. Rogers, The EGSnrc code system: MC simulation of electron and photon transport. Technical Report PIRS-701, NRCC, Ottawa, Canada, 2000] in order to obtain the dose distributions around simple heterogeneities. These simulations were then validated by experimental results obtained with thermoluminescent dosemeters and an ionisation chamber. The influence of samples composed of hip prosthesis materials (titanium alloy and steel) and a bone substitute was notably studied. A more complex model was then developed with the Monte Carlo code BEAMnrc [D.W.O. Rogers, C.M. Ma, G.X. Ding, B. Walters, D. Sheikh-Bagheri, G.G. Zhang, BEAMnrc Users Manual. NRC Report PIRS 509(a) rev F, 2001] in order to take the hip prosthesis geometry into account. The simulation results were compared to experimental measurements performed in a water phantom, in the case of a standard treatment of a pelvic cancer, for one of the beams passing through the implant. These results show the great influence of the prosthesis on the dose distribution.

  2. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computer efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
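The "simplifying assumptions" relating HZETRN to the Boltzmann equation reduce it, in the simplest limit, to a one-dimensional straight-ahead attenuation-plus-production balance. The sketch below solves a greatly simplified, energy-independent parent/daughter version of that balance analytically (the Bateman-type solution); the function name and the macroscopic cross sections used in the test are illustrative, not HZETRN data:

```python
import math

def straight_ahead_fluence(x_cm, sigma_p, sigma_s, mult=1.0, phi0=1.0):
    """Two-species straight-ahead transport sketch:
        d(phi_p)/dx = -sigma_p * phi_p
        d(phi_s)/dx = -sigma_s * phi_s + mult * sigma_p * phi_p
    i.e. a primary heavy-ion fluence attenuating while producing secondary
    nucleons, with the standard parent->daughter analytic solution."""
    phi_p = phi0 * math.exp(-sigma_p * x_cm)
    if sigma_p == sigma_s:                       # degenerate equal-sigma case
        phi_s = mult * sigma_p * x_cm * phi_p
    else:
        phi_s = (mult * sigma_p * phi0 / (sigma_s - sigma_p)
                 * (math.exp(-sigma_p * x_cm) - math.exp(-sigma_s * x_cm)))
    return phi_p, phi_s
```

Real HZETRN couples many species over energy-dependent cross sections; this sketch only shows the structure of the marching solution behind such codes.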

  3. Photonic-assisted microwave signal multiplication and modulation using a silicon Mach–Zehnder modulator

    PubMed Central

    Long, Yun; Zhou, Linjie; Wang, Jian

    2016-01-01

    Photonic generation of microwave signals is attractive for its many prominent advantages, such as large bandwidth, low loss, and immunity to electromagnetic interference. Based on a single integrated silicon Mach–Zehnder modulator (MZM), we propose and experimentally demonstrate a simple and compact photonic scheme for generating frequency-multiplied microwave signals. Using the fabricated integrated MZM, we also demonstrate the feasibility of microwave amplitude-shift keying (ASK) modulation based on an integrated photonic approach. In proof-of-concept experiments, a 2-GHz frequency-doubled microwave signal is generated using a 1-GHz driving signal. 750-MHz/1-GHz frequency-tripled/quadrupled microwave signals are obtained with a driving signal of 250 MHz. In addition, a 50-Mb/s binary amplitude-coded 1-GHz microwave signal is also successfully generated. PMID:26832305
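Frequency doubling with an MZM is commonly obtained by biasing the modulator at its transmission null, where the detected intensity contains only even harmonics of the drive. The numerical sketch below demonstrates that effect with a generic sin² transfer function; the modulation depth and frequencies are arbitrary illustration values, not the paper's device parameters:

```python
import numpy as np

fm = 1.0                      # drive frequency (arbitrary units)
fs, T = 64.0, 32.0            # sample rate and record length
t = np.arange(0, T, 1 / fs)   # 2048 samples
m = 1.0                       # phase-modulation depth (assumed)

# Push-pull MZM biased at its transmission null: the detected intensity
# I(t) ~ sin^2((m/2) cos(2*pi*fm*t)) expands (via Bessel functions) into
# even harmonics of fm only, so the photocurrent oscillates at 2*fm.
I = np.sin(0.5 * m * np.cos(2 * np.pi * fm * t)) ** 2

spec = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def tone(f):
    """Spectral magnitude at the FFT bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

fundamental, doubled = tone(fm), tone(2 * fm)   # doubled >> fundamental
```

Biasing away from the null, or cascading harmonics, gives the tripled/quadrupled outputs reported in the abstract; the null-bias case is the cleanest to verify numerically.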

  4. Fundamentals of Free-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Moision, Bruce; Erkmen, Baris

    2012-01-01

    Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
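The trade-off in point (a), high photon efficiency at low spectral efficiency, is easiest to see with idealized M-ary pulse-position modulation (PPM) and noiseless photon counting. The sketch below is illustrative arithmetic under those assumptions (the function name and the one-photon-per-pulse figure are ours, not from the talk):

```python
import math

def ppm_efficiencies(M, mean_photons_per_pulse=1.0):
    """Idealized (noiseless, photon-counting) M-ary PPM: one pulse occupies
    1 of M time slots and carries log2(M) bits. Returns (bits per photon,
    bits per slot). Growing M raises the former and shrinks the latter."""
    bits = math.log2(M)
    photon_eff = bits / mean_photons_per_pulse   # photon efficiency
    spectral_eff = bits / M                      # bandwidth (slot) efficiency
    return photon_eff, spectral_eff

table = {M: ppm_efficiencies(M) for M in (2, 16, 256, 4096)}
```

With one photon per pulse, photon efficiency climbs from 1 to 12 bits/photon across this range while bits per slot collapse from 0.5 to below 0.003, which is the "arbitrarily high photon efficiency, sub-optimal spectral efficiency" statement in quantitative form.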

  5. Monte Carlo studies on photon interactions in radiobiological experiments

    PubMed Central

    Shahmohammadi Beni, Mehrdad; Krstic, D.; Nikezic, D.

    2018-01-01

    X-ray and γ-ray photons have been widely used for studying the radiobiological effects of ionizing radiation. Photons are indirectly ionizing radiation, so they need to set in motion electrons (which are directly ionizing) to perform the ionizations. When the photon dose decreases below a certain limit, the number of electrons set in motion becomes so small that not all cells in an “exposed” cell population get at least one electron hit. When some cells in a cell population are not hit by a directly ionizing radiation (in other words, not irradiated), there will be a rescue effect between the irradiated and non-irradiated cells, and the resultant radiobiological effect observed for the “exposed” cell population will be different. In the present paper, the mechanisms underlying photon interactions in radiobiological experiments were studied using our NRUphoton computer code, which was benchmarked against the MCNP5 code by comparing the photon dose delivered to the cell layer underneath the water medium. The following conclusions were reached: (1) The interaction fractions decreased in the following order: 16O > 12C > 14N > 1H. Bulges in the interaction fractions (versus water medium thickness) were observed, which reflected changes in the energies of the propagating photons due to traversal of different amounts of water medium, as well as changes in the energy-dependent photon interaction cross-sections. (2) Photoelectric interaction and incoherent scattering dominated for lower-energy (10 keV) and higher-energy (100 keV and 1 MeV) incident photons, respectively. (3) The fractions of electron ejection from different nuclei were mainly governed by the photoelectric-effect cross-sections, and the fractions from the 1s subshell were the largest. 
(4) The penetration fractions in general decreased with increasing medium thickness, and increased with increasing incident photon energy, the latter being explained by the corresponding reduction in interaction cross-sections. (5) The areas under the angular distribution curves of photons exiting the medium layer and subsequently undergoing interactions within the cell layer became smaller for larger incident photon energies. (6) The number of cells suffering at least one electron hit increased with the administered dose. For larger incident photon energies, the numbers of cells suffering at least one electron hit became smaller, which was attributed to the reduction in the photon interaction cross-section. These results highlighted the importance of the administered dose in radiobiological experiments. In particular, the threshold administered doses at which all cells in the exposed cell array suffered at least one electron hit might provide hints on explaining the intriguing observation that radiation-induced cancers can be statistically detected only above the threshold value of ~100 mSv, and thus on reconciling controversies over the linear no-threshold model. PMID:29561871
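The "at least one electron hit" statistic in point (6) follows Poisson counting: if the mean number of hits per cell is λ, the hit fraction is 1 − e^(−λ). A minimal sketch, in which the linear hits-per-dose calibration constant is a hypothetical placeholder rather than a value from the NRUphoton simulations:

```python
import math

def fraction_hit(mean_hits_per_cell):
    """Poisson model: probability that a cell receives >= 1 electron hit."""
    return 1 - math.exp(-mean_hits_per_cell)

def dose_for_all_hit(n_cells, confidence=0.95, hits_per_mGy=0.1):
    """Dose at which ALL n_cells are hit at least once with the given
    confidence, assuming mean hits scale linearly with dose.
    hits_per_mGy is a hypothetical calibration constant, not a value
    taken from the paper."""
    # Require (1 - exp(-lam))**n >= confidence,
    # i.e. lam >= -ln(1 - confidence**(1/n)).
    lam = -math.log(1 - confidence ** (1 / n_cells))
    return lam / hits_per_mGy

frac = fraction_hit(1.0)   # ~0.632: ~37% of cells still receive no hit
```

Note how the threshold dose grows with the number of cells in the array, which is the shape of argument the abstract invokes when connecting threshold doses to the ~100 mSv detectability limit.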

  6. IPOLE - semi-analytic scheme for relativistic polarized radiative transport

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, M.; Gammie, C. F.

    2018-03-01

    We describe IPOLE, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the IBOTHROS scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma, it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate-independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, IPOLE is stable and efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's GRTRANS polarized transport code on a complicated model problem. We expect IPOLE will mainly find applications in modelling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modelling for the IXPE mission.
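The stability claim for the constant-coefficient analytic step is easiest to see in the scalar, unpolarized limit, where dI/ds = j − αI has the exact update below (IPOLE's actual step acts on the full Stokes vector with a matrix of coefficients; this sketch and its numbers are only illustrative):

```python
import math

def transfer_step(I0, j, alpha, ds):
    """Exact update for dI/ds = j - alpha*I over a step of length ds:
    the scalar limit of the constant-coefficient analytic solution.
    Relaxes toward the source function S = j/alpha for any step size."""
    if alpha == 0:
        return I0 + j * ds                     # pure emission
    S = j / alpha                              # source function
    return S + (I0 - S) * math.exp(-alpha * ds)

# A step with optical depth alpha*ds = 50 stays bounded and lands on the
# source function, where a forward-Euler step of the same size would go
# strongly negative (10 + 10*(2 - 5*10) = -470).
I = transfer_step(10.0, j=2.0, alpha=5.0, ds=10.0)
```

This is why an analytic (exponential-integrator) step remains physical at high optical and Faraday depth while naive explicit integration does not.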

  7. SOPHAEROS code development and its application to falcon tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lajtha, G.; Missirlian, M.; Kissane, M.

    1996-12-31

    One of the key issues in source-term evaluation for nuclear reactor severe accidents is determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in a mechanistic way in light water reactor circuits. These applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust, and no convergence problems are encountered. The calculation is also very fast, taking only three times real time on a Sun SPARC 5 workstation and running typically ~10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for prediction of the vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code's material data bank allows improved understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressure, chemical equilibrium constants) is very straightforward.

  8. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    NASA Astrophysics Data System (ADS)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon-number-splitting (PNS) attack on quantum cryptography systems with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack with phase encoding, differs physically from the analogous attack on polarization coding. As far as we know, in all practical works to date, processing of experimental data for phase coding has been done using formulas derived for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of the results, especially where the secrecy criterion in quantum cryptography is concerned.

  9. Experimental realization of the analogy of quantum dense coding in classical optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhenwei; Sun, Yifan; Li, Pengyun

    2016-06-15

    We report on the experimental realization of an analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding, which uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity of the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
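The two capacities quoted above are simple logarithms: four fully distinguishable symbols per channel use give log₂4 = 2 bits, while photonic dense coding with linear-optics Bell-state analysis, which resolves only three of the four Bell states, gives the commonly quoted log₂3 ≈ 1.585 bits. A sketch of that arithmetic (the Bell-state interpretation of the 1.585 figure is standard background, not spelled out in this abstract):

```python
import math

# Four distinguishable signal states per channel use -> 2 bits.
classical_capacity = math.log2(4)

# Linear-optics Bell measurement distinguishes only 3 of 4 Bell states,
# giving the log2(3) ~ 1.585 bits usually quoted for photonic dense coding.
quantum_lo_capacity = math.log2(3)

# A 24-bit ASCII message therefore fits in 24 / 2 = 12 quaternary symbols.
symbols_needed = 24 / classical_capacity
```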

  10. Intact coding region of the serotonin transporter gene in obsessive-compulsive disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altemus, M.; Murphy, D.L.; Greenberg, B.

    1996-07-26

    Epidemiologic studies indicate that obsessive-compulsive disorder is genetically transmitted in some families, although no genetic abnormalities have been identified in individuals with this disorder. The selective response of obsessive-compulsive disorder to treatment with agents which block serotonin reuptake suggests the gene coding for the serotonin transporter as a candidate gene. The primary structure of the serotonin-transporter coding region was sequenced in 22 patients with obsessive-compulsive disorder, using direct PCR sequencing of cDNA synthesized from platelet serotonin-transporter mRNA. No variations in amino acid sequence were found among the obsessive-compulsive disorder patients or healthy controls. These results do not support a role for alteration in the primary structure of the coding region of the serotonin-transporter gene in the pathogenesis of obsessive-compulsive disorder.

  11. Evolution of Structure and Composition in Saturn's Rings Due to Ballistic Transport of Micrometeoroid Impact Ejecta

    NASA Astrophysics Data System (ADS)

    Estrada, P. R.; Durisen, R. H.; Cuzzi, J. N.

    2014-04-01

    We introduce improved numerical techniques for simulating the structural and compositional evolution of planetary rings due to micrometeoroid bombardment and subsequent ballistic transport of impact ejecta. Our current, robust code, which is based on the original structural code of [1] and on the pollution transport code of [3], is capable of modeling structural changes and pollution transport simultaneously over long times on both local and global scales. We provide demonstrative simulations to compare with, and extend upon, previous work, as well as examples of how ballistic transport can maintain the observed structure in Saturn's rings using available Cassini occultation optical depth data.

  12. Fast Tract to Currency through Curriculum Morphing.

    ERIC Educational Resources Information Center

    Guenther, A. H.; Hull, Darrell

    This document addresses the challenges and opportunities that technological advances are presenting in education. Using the example of photonics (defined as the generation, manipulation, transport, detection, and use of light information and energy, whose quantum unit is the photon), the report illustrates the need for up-to-date educational curricula.…

  13. The photon gas formulation of thermal radiation

    NASA Technical Reports Server (NTRS)

    Ried, R. C., Jr.

    1975-01-01

    A statistical consideration of the energy, the linear momentum, and the angular momentum of the photons that make up a thermal radiation field was presented. A general nonequilibrium statistical thermodynamics approach toward a macroscopic description of thermal radiation transport was developed and then applied to the restricted equilibrium statistical thermostatics derivation of the energy, linear momentum, and intrinsic angular momentum equations for an isotropic photon gas. A brief treatment of a nonisotropic photon gas, as an example of the results produced by the nonequilibrium statistical thermodynamics approach, was given. The relativistic variation of temperature and the invariance of entropy were illustrated.

  14. When quantum optics meets topology

    NASA Astrophysics Data System (ADS)

    Amo, Alberto

    2018-02-01

    Routing photons at the micrometer scale remains one of the greatest challenges of integrated quantum optics. The main difficulty is scattering losses at bends and splitters in the photonic circuit. Current approaches require elaborate designs that are quite sensitive to fabrication details (1). Inspired by the physics underlying the one-way transport of electrons in topological insulators, on page 666 of this issue, Barik et al. (2) report a topological photonic crystal in which single photons are emitted and routed through bends with negligible loss. The marriage between quantum optics and topology promises new opportunities for compact quantum optics gating and manipulation.

  15. Dynamic characterization of hydrophobic and hydrophilic solutes in oleic-acid enhanced transdermal delivery using two-photon fluorescence microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, Te-Yu; Yang, Chiu-Sheng; Chen, Yang-Fang

    In this letter, we propose an efficient methodology for investigating the dynamic properties of sulforhodamine B and rhodamine B hexyl ester molecules transported across ex vivo human stratum corneum with and without oleic acid enhancement. Three-dimensional, time-lapse fluorescence images of the stratum corneum can be obtained using two-photon fluorescence microscopy. Furthermore, temporal quantification of transport enhancement in diffusion parameters can be achieved with the use of Fick's second law. Dynamic characterization of solutes transporting across the stratum corneum is an effective method for understanding transient phenomena in transdermal delivery of probe molecules, leading to improved delivery strategies of molecular species for therapeutic purposes.
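Extracting diffusion parameters from depth-resolved images typically means fitting profiles to a solution of Fick's second law. The sketch below uses the standard semi-infinite-slab solution with a constant surface concentration, C(x,t) = C₀·erfc(x / 2√(Dt)); the diffusivities, depth, and time are illustrative values, not measurements from the letter:

```python
import math

def concentration(x_um, t_s, D_um2_s, c0=1.0):
    """Fick's-second-law profile for a semi-infinite slab held at a constant
    surface concentration c0: C(x,t) = c0 * erfc(x / (2*sqrt(D*t)))."""
    return c0 * math.erfc(x_um / (2 * math.sqrt(D_um2_s * t_s)))

# A larger effective diffusivity (e.g. with a penetration enhancer such as
# oleic acid) raises the concentration reached at a given depth and time.
deep_slow = concentration(10.0, 600.0, D_um2_s=0.05)
deep_fast = concentration(10.0, 600.0, D_um2_s=0.50)
```

Fitting measured C(x, t) stacks against this form at each time point is one common way to quantify how an enhancer changes the effective D.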

  16. The EPQ Code System for Simulating the Thermal Response of Plasma-Facing Components to High-Energy Electron Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Robert Cameron; Steiner, Don

    2004-06-15

    The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite-difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the QUICKEST third-order accurate and stable explicit finite-difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code with analytical solutions shows that the code with the QUICKEST method is better than 99.9% accurate. The benchmarking of the EPQ code system and QTTN against experiments showed that QTTN's erosion-tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. QTTN and EPQ are thus verified and validated as able to calculate the temperature distribution, phase change, and surface erosion successfully.
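For readers unfamiliar with explicit finite-difference heat conduction, the sketch below shows the structure of such a solver in its simplest form: a first-order FTCS (forward-time, centered-space) step in 1-D with Dirichlet boundaries and the r ≤ 1/2 stability bound. This is a stand-in to illustrate the idea, not QTTN's third-order QUICKEST scheme, and the grid and temperatures are arbitrary:

```python
def ftcs_step(T, r):
    """One explicit finite-difference step of the 1-D heat equation,
        T_i^(n+1) = T_i + r*(T_{i+1} - 2*T_i + T_{i-1}),  r = alpha*dt/dx**2,
    with fixed-temperature (Dirichlet) boundaries. A first-order stand-in
    for QTTN's third-order QUICKEST scheme."""
    assert r <= 0.5, "explicit FTCS is unstable for r > 1/2"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new

# Runaway-electron-like surface heating: a hot face relaxing into the bulk.
T = [1000.0] + [20.0] * 9
for _ in range(50):
    T = ftcs_step(T, r=0.4)
```

QUICKEST improves on this by adding third-order-accurate advective/diffusive terms while remaining explicit, which is what lets QTTN reach its quoted better-than-99.9% agreement with analytic solutions.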

  17. Comparing Turbulence Simulation with Experiment in DIII-D

    NASA Astrophysics Data System (ADS)

    Ross, D. W.; Bravenec, R. V.; Dorland, W.; Beer, M. A.; Hammett, G. W.; McKee, G. R.; Murakami, M.; Jackson, G. L.

    2000-10-01

    Gyrofluid simulations of DIII-D discharges with the GRYFFIN code [D. W. Ross et al., Transport Task Force Workshop, Burlington, VT (2000)] are compared with transport and fluctuation measurements. The evolution of confinement-improved discharges [G. R. McKee et al., Phys. Plasmas 7, 1870 (2000)] is studied at early times following impurity injection, when E×B rotational shear plays a small role. The ion thermal transport predicted by the code is consistent with the experimental values. Experimentally, changes in density profiles resulting from the injection of neon lead to reductions in fluctuation levels and transport following the injection. This triggers subsequent changes in the shearing rate that further reduce the turbulence [M. Murakami et al., European Physical Society, Budapest (2000); M. Murakami et al., this meeting]. Estimated uncertainties in the plasma profiles, however, make it difficult to simulate these reductions with the code. These cases will also be studied with the GS2 gyrokinetic code.

  18. Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chame, Jacqueline

    2011-05-27

    The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) Framework and its applications to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise-effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, and verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport, particle transport, ITB formation, and cross-cuts such as edge-core coupling, interaction of energetic particles with turbulence, and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse-graining in phase space. Verification is pursued by linear stability comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA, and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code-development research programs are supported by complementary efforts in computer science, high-performance computing, and data management.

  19. Overview of Recent Radiation Transport Code Comparisons for Space Applications

    NASA Astrophysics Data System (ADS)

    Townsend, Lawrence

    Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on the areas of agreement and disagreement among the various code predictions and published data.

  20. A comparative study of space radiation organ doses and associated cancer risks using PHITS and HZETRN.

    PubMed

    Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E

    2013-10-21

    NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.

  2. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.

  3. Analysis of dpa rates in the HFIR reactor vessel using a hybrid Monte Carlo/deterministic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward

    2016-01-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 below to approximately 12 above the axial extent of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  4. X-Ray Spectra from MHD Simulations of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy D.; Krolik, Julian H.; Noble, Scott C.

    2012-01-01

    We present the results of a new global radiation transport code coupled to a general relativistic magneto-hydrodynamic simulation of an accreting, nonrotating black hole. For the first time, we are able to explain from first principles in a self-consistent way the X-ray spectra observed from stellar-mass black holes, including a thermal peak, Compton reflection hump, power-law tail, and broad iron line. Varying only the mass accretion rate, we are able to reproduce the low/hard, steep power-law, and thermal-dominant states seen in most galactic black hole sources. The temperature in the corona is T(sub e) of approximately 10 keV in a boundary layer near the disk and rises smoothly to T(sub e) greater than or approximately 100 keV in low-density regions far above the disk. Even as the disk's reflection edge varies from the horizon out to approximately equal to 6M as the accretion rate decreases, we find that the shape of the Fe Kα line is remarkably constant. This is because photons emitted from the plunging region are strongly beamed into the horizon and never reach the observer. We have also carried out a basic timing analysis of the spectra and find that the fractional variability increases with photon energy and viewer inclination angle, consistent with the coronal hot spot model for X-ray fluctuations.

  5. MILSTAMP TACs: Military Standard Transportation and Movement Procedures Transportation Account Codes. Volume 2

    DTIC Science & Technology

    1987-02-15

    this chapter. NO - If shipment is not second destination transportation, obtain fund cite per yes response for question 2 above. 4. For Direct Support...return... A820 (8) LOGAIR/QUICKTRANS. Transportation Account Codes designed herein are applicable to the...

  6. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties on space radiation risk estimates in humans are associated with the poor knowledge of the biological effects of low and high LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation-induced failures in advanced microelectronics, such as single-event effects, etc., and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection are discussed, and future accelerator experiments needed for testing and validating codes and shielding materials are described.

  7. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
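    The Feynman-histogram observable mentioned above reduces, for a given gate width, to the excess variance of counts per gate. A toy sketch (not any of the codes' implementations; the source parameters are invented) showing why correlated emission pushes the Feynman-Y above the Poisson value of zero:

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y statistic: excess variance of gate counts, var/mean - 1.
    Zero for an uncorrelated (Poisson) source; positive when neutrons
    arrive in correlated bursts, as they do from fission chains."""
    c = np.asarray(counts, dtype=float)
    return c.var() / c.mean() - 1.0

rng = np.random.default_rng(1)
gates = 500_000

# Uncorrelated source: Poisson counts per gate -> Y ~ 0
y0 = feynman_y(rng.poisson(4.0, gates))

# Toy correlated source: "fissions" per gate are Poisson, and each one
# contributes exactly two detected neutrons -> Y = 1 (pairs double the variance)
y1 = feynman_y(2 * rng.poisson(2.0, gates))
print(round(y0, 2), round(y1, 2))
```

    In practice the singles and doubles rates are extracted from how Y grows with gate width, but the variance-to-mean construction above is the core of the histogram analysis.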

  9. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    NASA Astrophysics Data System (ADS)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks.
    There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with full 3D benchmark values. Given the fairly limited scope of these tests and the inability of any one code to perform well for all cases, a paradigm shift may be due for modeling 1D solar fluxes for cloudy atmospheres.
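    The overlap genres above differ in how layer cloud fractions combine into a total cloud cover. A minimal sketch of the three standard combination rules, applied here only to total cover (the layer fractions are illustrative; real codes apply these assumptions throughout the radiative transfer):

```python
def total_cloud_fraction(c, overlap="random"):
    """Total cloud cover from layer cloud fractions c (top to bottom, 0..1).

    random     : uncorrelated layers        -> C = 1 - prod(1 - c_k)
    maximum    : perfectly aligned layers   -> C = max(c_k)
    max-random : contiguous cloudy layers maximally overlapped, with the
                 resulting blocks (separated by clear layers) combined randomly
    """
    if overlap == "maximum":
        return max(c, default=0.0)
    if overlap == "random":
        p_clear = 1.0
        for ck in c:
            p_clear *= 1.0 - ck
        return 1.0 - p_clear
    if overlap == "max-random":
        p_clear, block_max = 1.0, 0.0
        for ck in c:
            if ck > 0.0:
                block_max = max(block_max, ck)   # extend the cloudy block
            else:                                # clear layer closes the block
                p_clear *= 1.0 - block_max
                block_max = 0.0
        p_clear *= 1.0 - block_max
        return 1.0 - p_clear
    raise ValueError(overlap)

layers = [0.3, 0.3, 0.0, 0.4]   # two cloudy blocks separated by a clear layer
print(total_cloud_fraction(layers, "maximum"))     # 0.4
print(total_cloud_fraction(layers, "random"))      # 1 - 0.7*0.7*0.6 = 0.706
print(total_cloud_fraction(layers, "max-random"))  # 1 - 0.7*0.6 = 0.58
```

    The spread between 0.4 and 0.706 for the same column illustrates why the choice of overlap genre alone can separate the 1D codes in the intercomparison.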

  10. SEURAT: SPH scheme extended with ultraviolet line radiative transfer

    NASA Astrophysics Data System (ADS)

    Abe, Makito; Suzuki, Hiroyuki; Hasegawa, Kenji; Semelin, Benoit; Yajima, Hidenobu; Umemura, Masayuki

    2018-05-01

    We present a novel Lyman alpha (Lyα) radiative transfer code, SEURAT (SPH scheme Extended with Ultraviolet line RAdiative Transfer), where line scatterings are solved adaptively with the resolution of the smoothed particle hydrodynamics (SPH). The radiative transfer method implemented in SEURAT is based on a Monte Carlo algorithm in which the scattering and absorption by dust are also incorporated. We perform standard test calculations to verify the validity of the code: (i) emergent spectra from a static uniform sphere, (ii) emergent spectra from an expanding uniform sphere, and (iii) escape fraction from a dusty slab. Thereby, we demonstrate that our code solves the Lyα radiative transfer with sufficient accuracy. We emphasize that SEURAT can treat the transfer of Lyα photons even in highly complex systems that have significantly inhomogeneous density fields. The high adaptivity of SEURAT is desirable for solving the propagation of Lyα photons in the interstellar medium of young star-forming galaxies like Lyα emitters (LAEs). Thus, SEURAT provides a powerful tool to model the emergent spectra of Lyα emission, which can be compared to the observations of LAEs.
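    Test (iii), escape from a dusty slab, can be mimicked with a few lines of Monte Carlo. This sketch uses grey isotropic scattering with a single-scattering albedo rather than SEURAT's resonant Lyα line transfer, and all parameters are invented:

```python
import math, random

def escape_fraction(tau_half, albedo, n_photons=20_000, seed=7):
    """Monte Carlo escape fraction from a uniform slab of half-thickness
    `tau_half` (in optical depths), with isotropic scattering, single-
    scattering albedo `albedo`, and photons emitted at the midplane."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        tau = 0.0                          # position in optical depth, slab is [-T, T]
        mu = 2.0 * rng.random() - 1.0      # direction cosine w.r.t. slab normal
        while True:
            step = -math.log(1.0 - rng.random())   # optical depth to next event
            tau += step * mu
            if abs(tau) >= tau_half:               # photon leaves the slab
                escaped += 1
                break
            if rng.random() > albedo:              # event is an absorption by dust
                break
            mu = 2.0 * rng.random() - 1.0          # isotropic re-emission
    return escaped / n_photons

print(escape_fraction(1.0, 0.5))   # partially absorbing slab: some fraction escapes
print(escape_fraction(1.0, 1.0))   # conservative scattering: every photon escapes (1.0)
```

    A resonant-line code replaces the grey step sampling with a frequency-dependent opacity and a redistribution function, but the random-walk skeleton is the same.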

  11. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  12. Entanglement evaluation of non-Gaussian states generated by photon subtraction from squeezed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide

    2006-04-15

    We consider the problem of evaluating the entanglement of non-Gaussian mixed states generated by photon subtraction from entangled squeezed states. The entanglement measures we use are the negativity and the logarithmic negativity. These measures possess the unusual property of being computable with linear algebra packages even for high-dimensional quantum systems. We numerically evaluate these measures for the non-Gaussian mixed states which are generated by photon subtraction with on/off photon detectors. The results are compared with the behavior of certain operational measures, namely the teleportation fidelity and the mutual information in the dense coding scheme. It is found that all of these results are mutually consistent, in the sense that whenever the enhancement is seen in terms of the operational measures, the negativity and the logarithmic negativity are also enhanced.
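    Both measures derive from the partial transpose of the density matrix: the negativity sums the magnitudes of its negative eigenvalues, and the logarithmic negativity is E_N = log2 ||ρ^{T_B}||_1. A minimal two-qubit sketch of that linear-algebra computation (using a pure partially entangled state, not the photon-subtracted states of the paper):

```python
import numpy as np

def partial_transpose_B(rho):
    """Partial transpose over the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                    # indices: (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'

def log_negativity(rho):
    """E_N = log2 || rho^{T_B} ||_1 via eigenvalues of the partial transpose."""
    eig = np.linalg.eigvalsh(partial_transpose_B(rho))
    return np.log2(np.abs(eig).sum())

# |psi> = cos t |00> + sin t |11>  ->  E_N = log2(1 + sin 2t)
t = np.pi / 4                                  # maximally entangled (Bell) state
psi = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])
rho = np.outer(psi, psi)
print(log_negativity(rho))                     # 1.0 for the Bell state
```

    For a separable state the partial transpose has no negative eigenvalues, the trace norm is 1, and E_N vanishes, which is why this quantity works as an entanglement measure.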

  13. Radiation Transport Tools for Space Applications: A Review

    NASA Technical Reports Server (NTRS)

    Jun, Insoo; Evans, Robin; Cherng, Michael; Kang, Shawn

    2008-01-01

    This slide presentation contains a brief discussion of nuclear transport codes widely used in the space radiation community for shielding and scientific analyses. Seven radiation transport codes are addressed, and the two general methods (i.e., the Monte Carlo method and the deterministic method) are briefly reviewed.
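    The two general methods can be contrasted on the simplest possible problem: uncollided transmission through a slab, where the deterministic answer is just exp(-sigma*x) while Monte Carlo samples free-flight distances and counts survivors. The numbers below are illustrative:

```python
import math, random

# Uncollided transmission through a slab: total macroscopic cross section
# sigma (1/cm) and thickness x (cm), both illustrative values.
sigma, x = 0.2, 10.0

# Deterministic: integrate the (trivial) transport equation directly.
det = math.exp(-sigma * x)

# Monte Carlo: sample exponential free-flight distances and count the
# particles whose first collision would lie beyond the slab.
rng = random.Random(0)
n = 100_000
transmitted = sum(1 for _ in range(n) if -math.log(1.0 - rng.random()) / sigma > x)
mc = transmitted / n

print(det, mc)   # both near exp(-2) ~ 0.135
```

    The deterministic result is exact here; the Monte Carlo estimate carries statistical noise that shrinks as 1/sqrt(n), which is the essential trade-off between the two families of codes.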

  14. Fluctuations of tunneling currents in photonic and polaritonic systems

    NASA Astrophysics Data System (ADS)

    Mantsevich, V. N.; Glazov, M. M.

    2018-04-01

    Here we develop the nonequilibrium Green's function formalism to analyze the fluctuation spectra of boson tunneling currents. The approach allows us to calculate the noise spectra in both equilibrium and nonequilibrium conditions. The proposed general formalism is applied to several important realizations of boson transport, including tunneling transport between two reservoirs and the case where the boson current flows through an intermediate region between the reservoirs. The developed theory can be applied to the analysis of current noise in waveguides, coupled optical resonators, quantum microcavities, etc., where the tunneling of photons, exciton-polaritons, or excitons can be realized.

  15. Comparison of Transport Codes, HZETRN, HETC and FLUKA, Using 1977 GCR Solar Minimum Spectra

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Slaba, Tony C.; Tripathi, Ram K.; Blattnig, Steve R.; Norbury, John W.; Badavi, Francis F.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.

    2009-01-01

    The HZETRN deterministic radiation transport code is one of several tools developed to analyze the effects of harmful galactic cosmic rays (GCR) and solar particle events (SPE) on mission planning, astronaut shielding and instrumentation. This paper is a comparison study involving the two Monte Carlo transport codes, HETC-HEDS and FLUKA, and the deterministic transport code, HZETRN. Each code is used to transport ions from the 1977 solar minimum GCR spectrum impinging upon a 20 g/cm2 Aluminum slab followed by a 30 g/cm2 water slab. This research is part of a systematic effort of verification and validation to quantify the accuracy of HZETRN and determine areas where it can be improved. Comparisons of dose and dose equivalent values at various depths in the water slab are presented in this report. This is followed by a comparison of the proton fluxes, and the forward, backward and total neutron fluxes at various depths in the water slab. Comparisons of the secondary light ion 2H, 3H, 3He and 4He fluxes are also examined.

  16. 3D Vectorial Time Domain Computational Integrated Photonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Bond, T C; Koning, J M

    2007-02-16

    The design of integrated photonic structures poses considerable challenges. 3D-Time-Domain design tools are fundamental in enabling technologies such as all-optical logic, photonic bandgap sensors, THz imaging, and fast radiation diagnostics. Such technologies are essential to LLNL and WFO sponsors for a broad range of applications: encryption for communications and surveillance sensors (NSA, NAI and IDIV/PAT); high density optical interconnects for high-performance computing (ASCI); high-bandwidth instrumentation for NIF diagnostics; micro-sensor development for weapon miniaturization within the Stockpile Stewardship and DNT programs; and applications within HSO for CBNP detection devices. While there exist a number of photonics simulation tools on the market, they primarily model devices of interest to the communications industry. We saw the need to extend our previous software to match the Laboratory's unique emerging needs. These include modeling novel material effects (such as those of radiation induced carrier concentrations on refractive index) and device configurations (RadTracker bulk optics with radiation induced details, Optical Logic edge emitting lasers with lateral optical inputs). In addition we foresaw significant advantages to expanding our own internal simulation codes: parallel supercomputing could be incorporated from the start, and the simulation source code would be accessible for modification and extension. This work addressed Engineering's Simulation Technology Focus Area, specifically photonics. Problems addressed from the Engineering roadmap of the time included modeling the Auston switch (an important THz source/receiver), modeling Vertical Cavity Surface Emitting Lasers (VCSELs, which had been envisioned as part of fast radiation sensors), and multi-scale modeling of optical systems (for a variety of applications).
    We proposed to develop novel techniques to numerically solve the 3D multi-scale propagation problem for both the microchip laser logic devices as well as devices characterized by electromagnetic (EM) propagation in nonlinear materials with time-varying parameters. The deliverables for this project were extended versions of the laser logic device code Quench2D and the EM propagation code EMsolve with new modules containing the novel solutions incorporated by taking advantage of the existing software interface and structured computational modules. Our approach was multi-faceted since no single methodology can always satisfy the tradeoff between model runtime and accuracy requirements. We divided the problems to be solved into two main categories: those that required Full Wave Methods and those that could be modeled using Approximate Methods. Full Wave techniques are useful in situations where Maxwell's equations are not separable (or the problem is small in space and time), while approximate techniques can treat many of the remaining cases.

  17. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  18. Light Control and Image Transmission Through Photonic Lattices with Engineered Coupling

    DTIC Science & Technology

    2015-05-05

    include mainly beam control in engineered photonic lattices, Tamm and Shockley-like edge states and topological surface states in 2D honeycomb lattices ("photonic graphene"), and light localization and transport in disordered

  19. Topological Valley Transport in Two-dimensional Honeycomb Photonic Crystals.

    PubMed

    Yang, Yuting; Jiang, Hua; Hang, Zhi Hong

    2018-01-25

    Two-dimensional photonic crystals, in analogy to AB/BA-stacked bilayer graphene in electronic systems, are studied. Inequivalent valleys in the momentum space for photons can be manipulated by simply engineering the diameters of cylinders in a honeycomb lattice. The inequivalent valleys in the photonic crystal are selectively excited by a designed optical chiral source, and bulk valley polarizations are visualized. Unidirectional valley interface states are shown to exist on a domain wall connecting two photonic crystals with different valley Chern numbers. With a similar optical vortex index, interface states can couple with bulk valley polarizations, and thus a valley filter and valley coupler can be designed. Our simple dielectric photonic crystal scheme can help to exploit the valley degree of freedom for future optical devices.

  20. Photonic Bandgaps in Photonic Molecules

    NASA Technical Reports Server (NTRS)

    Smith, David D.; Chang, Hongrok; Gates, Amanda L.; Fuller, Kirk A.; Gregory, Don A.; Witherow, William K.; Paley, Mark S.; Frazier, Donald O.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    This talk will focus on photonic bandgaps that arise due to nearly free photon and tight-binding effects in coupled microparticle and ring-resonator systems. The Mie formulation for homogeneous spheres is generalized to handle core/shell systems and multiple concentric layers in a manner that exploits an analogy with stratified planar systems, thereby allowing concentric multi-layered structures to be treated as photonic bandgap (PBG) materials. Representative results from a Mie code employing this analogy demonstrate that photonic bands arising from nearly free photon effects are easily observed in the backscattering, asymmetry parameter, and albedo for periodic quarter-wave concentric layers, though are not readily apparent in extinction spectra. Rather, the periodicity simply alters the scattering profile, enhancing the ratio of backscattering to forward scattering inside the bandgap, in direct analogy with planar quarter-wave multilayers. PBGs arising from tight-binding may also be observed when the layers (or rings) are designed such that the coupling between them is weak. We demonstrate that for a structure consisting of N coupled micro-resonators, the morphology dependent resonances split into N higher-Q modes, in direct analogy with other types of oscillators, and that this splitting ultimately results in PBGs which can lead to enhanced nonlinear optical effects.
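    The planar quarter-wave analogy invoked above is easy to make concrete with the standard characteristic-matrix (transfer-matrix) method for thin-film stacks; the indices, substrate, and pair count below are illustrative, not from the talk:

```python
import numpy as np

def reflectance(wavelength, layers, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic matrix method; `layers` = [(index, thickness), ...]."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength     # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave stack for design wavelength lam0: each thickness is lam0/(4n),
# alternating high/low refractive index.
lam0, nH, nL, pairs = 1.0, 2.3, 1.38, 6
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * pairs

print(reflectance(lam0, stack))        # inside the bandgap: close to 1
print(reflectance(1.8 * lam0, stack))  # outside the gap: much lower
```

    The high reflectance at the design wavelength and the drop outside it are the planar counterpart of the photonic bands the abstract describes for concentric quarter-wave layers on a sphere.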

  1. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    proton; PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and...radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficients from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6

  3. Moving from Batch to Field Using the RT3D Reactive Transport Modeling System

    NASA Astrophysics Data System (ADS)

    Clement, T. P.; Gautam, T. R.

    2002-12-01

    The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs an operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D reaction package. Further, a utility code known as BATCHRXN allows users to independently test and debug their reaction package. To analyze a new reaction system at a batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of tetrachloroethane (PCA) and its daughter products. References: 1) Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
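    At batch scale, the first example's sequential first-order chain is just a set of coupled ODEs, dC_i/dt = k_{i-1} C_{i-1} - k_i C_i. A minimal explicit-Euler sketch (the rate constants and initial concentration are invented, and RT3D's actual reaction packages are Fortran-90 subroutines, not Python):

```python
# PCE -> TCE -> DCE -> VC sequential first-order decay chain, explicit Euler.
# Rate constants (1/day) and initial concentrations (mg/L) are illustrative.
k = {"PCE": 0.05, "TCE": 0.03, "DCE": 0.02, "VC": 0.01}
chain = ["PCE", "TCE", "DCE", "VC"]
c = {"PCE": 1.0, "TCE": 0.0, "DCE": 0.0, "VC": 0.0}

dt, t_end = 0.01, 100.0                        # time step and horizon (days)
for _ in range(int(t_end / dt)):
    loss = {s: k[s] * c[s] for s in chain}     # first-order decay rate of each species
    for i, s in enumerate(chain):
        c[s] -= loss[s] * dt                   # decay of species s
        if i > 0:
            c[s] += loss[chain[i - 1]] * dt    # production from its parent
print({s: round(v, 3) for s, v in c.items()})
```

    VC's own decay product (ethene) is untracked here, so total mass slowly leaves the system. This is the kind of kinetic model one would verify in BATCHRXN before porting the reaction package to full transport.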

  4. Time-bin entangled photons from a quantum dot

    PubMed Central

    Jayakumar, Harishankar; Predojević, Ana; Kauten, Thomas; Huber, Tobias; Solomon, Glenn S.; Weihs, Gregor

    2014-01-01

    Long distance quantum communication is one of the prime goals in the field of quantum information science. With information encoded in the quantum state of photons, existing telecommunication fibre networks can be effectively used as a transport medium. To achieve this goal, a source of robust entangled single photon pairs is required. Here, we report the realization of a source of time-bin entangled photon pairs utilizing the biexciton-exciton cascade in a III/V self-assembled quantum dot. We analyse the generated photon pairs by an inherently phase-stable interferometry technique, facilitating uninterrupted long integration times. We confirm the entanglement by performing quantum state tomography of the emitted photons, which yields a fidelity of 0.69(3) and a concurrence of 0.41(6) for our realization of time-energy entanglement from a single quantum emitter. PMID:24968024
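
    For readers unfamiliar with how a concurrence value such as 0.41(6) is obtained from tomography data, the sketch below applies the standard Wootters formula to a two-qubit density matrix. The Bell-state input is purely illustrative, not the reconstructed experimental state.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    # Spin-flipped state and the product whose eigenvalues enter C
    rho_tilde = yy @ rho.conj() @ yy
    evals = np.linalg.eigvals(rho @ rho_tilde)
    # Eigenvalues are real and non-negative up to numerical noise
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# A maximally entangled Bell state has concurrence 1;
# a maximally mixed state has concurrence 0.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_bell = np.outer(phi, phi.conj())
```

    A concurrence of 0.41, as reported here, thus certifies entanglement (any value above zero does) without the state being maximally entangled.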

  5. Time-bin entangled photons from a quantum dot.

    PubMed

    Jayakumar, Harishankar; Predojević, Ana; Kauten, Thomas; Huber, Tobias; Solomon, Glenn S; Weihs, Gregor

    2014-06-26

    Long-distance quantum communication is one of the prime goals in the field of quantum information science. With information encoded in the quantum state of photons, existing telecommunication fibre networks can be effectively used as a transport medium. To achieve this goal, a source of robust entangled single-photon pairs is required. Here we report the realization of a source of time-bin entangled photon pairs utilizing the biexciton-exciton cascade in a III/V self-assembled quantum dot. We analyse the generated photon pairs by an inherently phase-stable interferometry technique, facilitating uninterrupted long integration times. We confirm the entanglement by performing quantum state tomography of the emitted photons, which yields a fidelity of 0.69(3) and a concurrence of 0.41(6) for our realization of time-energy entanglement from a single quantum emitter.

  6. Remote entanglement between a single atom and a Bose-Einstein condensate.

    PubMed

    Lettner, M; Mücke, M; Riedl, S; Vo, C; Hahn, C; Baur, S; Bochmann, J; Ritter, S; Dürr, S; Rempe, G

    2011-05-27

    Entanglement between stationary systems at remote locations is a key resource for quantum networks. We report on the experimental generation of remote entanglement between a single atom inside an optical cavity and a Bose-Einstein condensate (BEC). To produce this, a single photon is created in the atom-cavity system, thereby generating atom-photon entanglement. The photon is transported to the BEC and converted into a collective excitation in the BEC, thus establishing matter-matter entanglement. After a variable delay, this entanglement is converted into photon-photon entanglement. The matter-matter entanglement lifetime of 100 μs exceeds the photon duration by 2 orders of magnitude. The total fidelity of all concatenated operations is 95%. This hybrid system opens up promising perspectives in the field of quantum information. © 2011 American Physical Society

  7. Surface modification by metal ion implantation forming metallic nanoparticles in an insulating matrix

    NASA Astrophysics Data System (ADS)

    Salvadori, M. C.; Teixeira, F. S.; Sgubin, L. G.; Cattani, M.; Brown, I. G.

    2014-08-01

There is special interest in the incorporation of metallic nanoparticles in a surrounding dielectric matrix for obtaining composites with desirable characteristics, such as surface plasmon resonance, which can be used in photonics and sensing, and controlled surface electrical conductivity. We have investigated nanocomposites produced by metal ion implantation into insulating substrates, where the implanted metal self-assembles into nanoparticles. The nanoparticles nucleate near the maximum of the implantation depth profile (projected range), which can be estimated by computer simulation using the TRIDYN code. TRIDYN is a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that takes into account compositional changes in the substrate due to two factors: previously implanted dopant atoms, and sputtering of the substrate surface. Our study shows that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. We have studied Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina and Au/alumina systems. Transmission electron microscopy of the implanted samples shows that metallic nanoparticles form in the insulating matrix. These nanocomposites have been characterized by measuring the resistivity of the composite layer as a function of the implantation dose. The experimental results are compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement is found between the experimental results and the predictions of the theory. We conclude that the conduction process is due only to percolation (when the conducting elements are in geometric contact) and that the contribution from tunneling conduction is negligible.
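
    The percolation picture invoked here, conduction only when particles form a geometrically connected path, can be made concrete with a minimal 2D site-percolation sketch. The lattice model and occupation probability below are illustrative and much simpler than the random resistor network fitted in the paper.

```python
import random
from collections import deque

def percolates(grid):
    """True if occupied sites connect the top row to the bottom row
    (4-neighbour connectivity), i.e. a conducting path exists."""
    n = len(grid)
    seen = set()
    q = deque((0, j) for j in range(n) if grid[0][j])
    seen.update(q)
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                q.append((a, b))
    return False

random.seed(0)
n, p = 50, 0.7   # occupation probability above the ~0.593 site threshold
grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
```

    Sweeping the occupation probability (the analogue of implantation dose) across the threshold reproduces the sharp insulator-to-conductor transition the resistivity measurements probe.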

  8. Generic reactive transport codes as flexible tools to integrate soil organic matter degradation models with water, transport and geochemistry in soils

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik; Gérard, Fréderic; Mayer, Uli; Simunek, Jirka; Leterme, Bertrand

    2016-04-01

A large number of organic matter degradation, CO2 transport and dissolved organic matter models have been developed during the last decades. However, organic matter degradation models are in many cases strictly hard-coded in terms of organic pools, degradation kinetics and dependency on environmental variables. The scientific input of the model user is typically limited to the adjustment of input parameters. In addition, the coupling with geochemical soil processes, including aqueous speciation, pH-dependent sorption and colloid-facilitated transport, is not incorporated in many of these models, strongly limiting the scope of their application. Furthermore, the most comprehensive organic matter degradation models are combined with simplified representations of flow and transport processes in the soil system. We illustrate the capability of generic reactive transport codes to overcome these shortcomings. The formulations of reactive transport codes include a physics-based continuum representation of flow and transport processes, while biogeochemical reactions can be described as equilibrium processes constrained by thermodynamic principles and/or kinetic reaction networks. The flexibility of these types of codes allows for straightforward extension of reaction networks, permits the inclusion of new model components (e.g., organic matter pools, rate equations, parameter dependency on environmental conditions) and thereby facilitates an application-tailored implementation of organic matter degradation models and related processes. A numerical benchmark involving two reactive transport codes (HPx and MIN3P) demonstrates how the process-based simulation of transient variably saturated water flow (Richards equation), solute transport (advection-dispersion equation), heat transfer and diffusion in the gas phase can be combined with a flexible implementation of a soil organic matter degradation model.
The benchmark includes the production of leachable organic matter and inorganic carbon in the aqueous and gaseous phases, as well as different decomposition functions with first-order, linear dependence or nonlinear dependence on a biomass pool. In addition, we show how processes such as local bioturbation (bio-diffusion) can be included implicitly through a Fickian formulation of transport of soil organic matter. Coupling soil organic matter models with generic and flexible reactive transport codes offers a valuable tool to enhance insights into coupled physico-chemical processes at different scales within the scope of C-biogeochemical cycles, possibly linked with other chemical elements such as plant nutrients and pollutants.
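
    The three decomposition formulations mentioned in the benchmark, first-order, linear in biomass, and nonlinear in biomass, can be sketched side by side. The rate laws below follow the generic forms described in the abstract; the Monod form and all parameter values are illustrative assumptions, not the benchmark's actual equations.

```python
def decomposition_rate(c_som, c_bio, model, k=0.01, k_half=0.5):
    """Decomposition rate of a soil-organic-matter (SOM) pool under
    three candidate rate laws (parameters are illustrative)."""
    if model == "first_order":      # rate independent of biomass
        return k * c_som
    if model == "linear_biomass":   # rate scales linearly with biomass
        return k * c_som * c_bio
    if model == "monod":            # nonlinear (saturating) biomass term
        return k * c_som * c_bio / (k_half + c_bio)
    raise ValueError(model)

# Explicit time stepping of a single SOM pool, first-order case
c, dt = 1.0, 1.0
for _ in range(100):
    c -= dt * decomposition_rate(c, 1.0, "first_order")
```

    Swapping the `model` argument is the kind of application-tailored change that a generic reactive transport code makes straightforward, whereas a hard-coded model would require modifying source code.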

  9. U.S. Commercial Spent Nuclear Fuel Assembly Characteristics - 1968-2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianwei; Peterson, Joshua L.; Gauld, Ian C.

    2016-09-01

Activities related to management of spent nuclear fuel (SNF) are increasing in the US and many other countries. Over 240,000 SNF assemblies have been discharged from US commercial reactors since the late 1960s. The enrichment and burnup of SNF have changed significantly over the past 40 years, and fuel assembly designs have also evolved. Understanding the general characteristics of SNF helps regulators and other stakeholders form overall strategies towards the final disposal of US SNF. This report documents a survey of all US commercial SNF assemblies in the GC-859 database and provides reference SNF source terms (e.g., nuclide inventories, decay heat, and neutron/photon emission) at various cooling times up to 200 years after fuel discharge. This study reviews the distribution and evolution of fuel parameters of all SNF assemblies discharged over the past 40 years. Assemblies were categorized into three groups based on discharge year, and the median burnups and enrichments of each group were used to establish representative cases. An extended burnup case was created for boiling water reactor (BWR) fuels, and another was created for the pressurized water reactor (PWR) fuels. Two additional cases were developed to represent the eight mixed oxide (MOX) fuel assemblies in the database. Burnup calculations were performed for each representative case. Realistic parameters for fuel design and operations were used to model the SNF and to provide reference fuel characteristics representative of the current inventory. Burnup calculations were performed using the ORIGEN code, which is part of the SCALE nuclear modeling and simulation code system. Results include total activity, decay heat, photon emission, neutron flux, gamma heat, and plutonium content, as well as concentrations for 115 significant nuclides. These quantities are important in the design, regulation, and operations of SNF storage, transportation, and disposal systems.
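
    The shape of a decay-heat source term over a 200-year cooling window can be sketched by summing exponential decays over a nuclide inventory. The inventory below is a placeholder with round numbers, not ORIGEN output; a real calculation tracks hundreds of nuclides and their build-up chains.

```python
import math

# Illustrative inventory: half-life (years) and initial decay heat (W)
# for a few nuclides that typically dominate SNF decay heat.
# These values are stand-ins, not results from the report.
inventory = {
    "Cs-137": (30.1, 500.0),
    "Sr-90":  (28.8, 400.0),
    "Am-241": (432.6, 100.0),
}

def decay_heat(t_years):
    """Total decay heat (W) after t_years of cooling, assuming each
    nuclide decays exponentially with its own half-life."""
    return sum(q0 * math.exp(-math.log(2.0) * t_years / t_half)
               for t_half, q0 in inventory.values())
```

    Even this toy model shows the qualitative behaviour that matters for storage and transportation design: the short-lived fission products dominate early, while longer-lived actinides set the late-time floor.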

  10. Absorbed dose evaluation of Auger electron-emitting radionuclides: impact of input decay spectra on dose point kernels and S-values

    NASA Astrophysics Data System (ADS)

    Falzone, Nadia; Lee, Boon Q.; Fernández-Varea, José M.; Kartsonaki, Christiana; Stuchbery, Andrew E.; Kibédi, Tibor; Vallis, Katherine A.

    2017-03-01

The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs - radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The number of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay (yield) from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis were considerably different from MIRD-RADTABS in the first few hundred nanometres from a point source, where most of the Auger electrons are stopped. S-values were, however, not significantly impacted, as the differences in DPKs at sub-micrometre distances quickly diminished at larger distances. Overestimation of the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.
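
    A DPK is essentially a histogram of Monte Carlo energy deposits in concentric shells around a point source, normalized per decay and per shell volume. The sketch below illustrates that scoring step on synthetic event data; the radial distribution and energy values are stand-ins for what an event-by-event code such as PENELOPE would produce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic MC output: radial distance (nm) and energy deposit (eV)
# per interaction; illustrative placeholder for transport-code output.
n_events = 100_000
r = rng.exponential(scale=50.0, size=n_events)     # nm
e_dep = rng.uniform(1.0, 10.0, size=n_events)      # eV

def dose_point_kernel(r, e_dep, dr=10.0, r_max=500.0, n_decays=1000):
    """Energy deposited per decay in concentric shells of width dr,
    divided by shell volume: the radial dose distribution (DPK)."""
    edges = np.arange(0.0, r_max + dr, dr)
    e_shell, _ = np.histogram(r, bins=edges, weights=e_dep)
    vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return edges, e_shell / n_decays / vol   # eV per decay per nm^3

edges, dpk = dose_point_kernel(r, e_dep)
```

    With a kernel in hand, S-values for cell-sized targets follow by integrating the kernel over the target geometry with the appropriate geometric reduction factors, as the study describes.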

  11. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

    NASA Astrophysics Data System (ADS)

    Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

    2014-03-01

Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.
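
    Track-averaged LET, one of the quantities SHIELD-HIT12A scores, is the step-length-weighted mean of LET over particle steps; its dose-averaged counterpart weights by energy deposit instead. The sketch below shows both definitions on toy step data; it is a conceptual illustration, not the code's scoring implementation.

```python
def track_averaged_let(steps):
    """Track-averaged LET: step-length-weighted mean of LET.
    'steps' is a list of (step_length, LET) pairs."""
    total_len = sum(l for l, _ in steps)
    return sum(l * let for l, let in steps) / total_len

def dose_averaged_let(steps):
    """Dose-averaged LET: weights each step by its energy deposit
    (step length x LET) instead of by step length alone."""
    total_dose = sum(l * let for l, let in steps)
    return sum(l * let * let for l, let in steps) / total_dose

steps = [(1.0, 2.0), (1.0, 10.0)]   # two equal-length steps, keV/um
```

    The two averages differ whenever LET varies along the track, which is why radiobiological models built on LET must state which average they use.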

  12. 49 CFR 171.25 - Additional requirements for the use of the IMDG Code.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Packages containing primary lithium batteries and cells that are transported in accordance with Special Provision 188 of the IMDG Code must be marked “PRIMARY LITHIUM BATTERIES—FORBIDDEN FOR TRANSPORT ABOARD PASSENGER AIRCRAFT” or “LITHIUM METAL BATTERIES—FORBIDDEN FOR TRANSPORT ABOARD PASSENGER AIRCRAFT.” This...

  13. 77 FR 18716 - Transportation Security Administration Postal Zip Code Change; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    ... organizational changes and it has no substantive effect on the public. DATES: Effective March 28, 2012. FOR... No. 1572-9] Transportation Security Administration Postal Zip Code Change; Technical Amendment AGENCY: Transportation Security Administration, DHS. ACTION: Final rule. SUMMARY: This rule is a technical change to...

  14. Mode-Selective Amplification in a Large Mode Area Yb-Doped Fiber Using a Photonic Lantern

    DTIC Science & Technology

    2016-05-15

in a few-mode, double-clad Yb-doped large mode area (LMA) fiber, utilizing an all-fiber photonic lantern. Amplification to multi-watt output power is...that could enable dynamic spatial mode control in high-power fiber lasers. © 2016 Optical Society of America OCIS codes: (060.2320) Fiber optics...amplifiers and oscillators; (060.2340) Fiber optics components. http://dx.doi.org/10.1364/OL.41.002157 The impressive growth experienced by fiber lasers and

  15. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
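
    One of the analysis tools mentioned, the peri-stimulus time histogram (PSTH), can be sketched independently of the toolbox. The sketch below bins event times relative to each stimulus onset and averages over trials; it is a conceptual Python illustration, not the FocusStack (MATLAB) API, and the data are hypothetical.

```python
import numpy as np

def psth(event_times, stim_onsets, window=(-1.0, 3.0), bin_width=0.25):
    """Peri-stimulus time histogram: count events in time bins aligned
    to each stimulus onset, averaged over trials
    (events per bin per trial)."""
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for onset in stim_onsets:
        rel = np.asarray(event_times) - onset
        h, _ = np.histogram(rel, bins=edges)
        counts += h
    return edges, counts / len(stim_onsets)

# Hypothetical calcium-transient times (s) and stimulus onsets (s)
events = [10.1, 10.3, 20.2, 20.25, 30.4]
onsets = [10.0, 20.0, 30.0]
edges, rate = psth(events, onsets)
```

    Dividing the per-trial counts by the bin width would convert the histogram to an event rate, which is the form usually plotted.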

  16. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    PubMed Central

    Muir, Dylan R.; Kampa, Björn M.

    2015-01-01

Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories. PMID:25653614

  17. Shielding properties of the ordinary concrete loaded with micro- and nano-particles against neutron and gamma radiations.

    PubMed

    Mesbahi, Asghar; Ghiasi, Hosein

    2018-06-01

The shielding properties of ordinary concrete doped with several micro- and nano-scaled materials were studied. Narrow-beam geometry was simulated using the MCNPX Monte Carlo code, and the mass attenuation coefficient of ordinary concrete doped with PbO2, Fe2O3, WO3 and H4B (boronium) at both nano and micro scales was calculated for photon and neutron beams. Mono-energetic beams of neutrons (100-3000 keV) and photons (142-1250 keV) were used for the calculations. The concrete doped with nano-sized particles showed a higher neutron removal cross section (7%) and photon attenuation coefficient (8%) relative to micro-particles. Application of nano-sized materials in the composition of new concretes for dual protection against neutrons and photons is recommended. For further studies, calculation of the attenuation coefficients of these nano-concretes at higher neutron and photon energies and for different particles is suggested. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Simulation of photon attenuation coefficients for the highly effective shielding material Lead-Boron Polyethylene

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Jia, M. C.; Gong, J. J.; Xia, W. M.

    2017-12-01

The mass attenuation coefficients of various Lead-Boron Polyethylene samples, which can be used as photon shielding materials in marine reactors, have been simulated using the MCNP-5 code and compared with theoretical values over the photon energy range 0.001 MeV to 20 MeV. A good agreement has been observed. The variations of mass attenuation coefficient, linear attenuation coefficient and mean free path with photon energy between 0.001 MeV and 100 MeV have been plotted. The results show that all the coefficients depend strongly on the photon energy and on the material's atomic composition and density. The dose transmission factors for Cesium-137 and Cobalt-60 sources have been calculated, and their variations with the thickness of the various sample materials have also been plotted. These variations show that the dose transmission factors decrease continuously as material thickness increases. The results of this paper can provide a reference for the use of the highly effective shielding material Lead-Boron Polyethylene.
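
    The quantities compared in this record are related by the narrow-beam exponential attenuation law: the linear attenuation coefficient is the mass attenuation coefficient times density, the mean free path is its reciprocal, and the transmission factor falls exponentially with thickness. The sketch below uses illustrative values, not the paper's MCNP-5 results.

```python
import math

def transmission(mu_mass, density, thickness_cm):
    """Narrow-beam transmission factor I/I0 = exp(-mu_linear * t),
    with mu_linear = (mu/rho) * rho."""
    mu_linear = mu_mass * density    # 1/cm
    return math.exp(-mu_linear * thickness_cm)

# Illustrative values only: mass attenuation coefficient (cm^2/g)
# of a shield near the Co-60 lines, and a bulk density (g/cm^3).
mu_mass, rho = 0.06, 3.5
mfp = 1.0 / (mu_mass * rho)          # mean free path, cm
```

    One mean free path of material transmits a fraction 1/e of the beam, which is why the plotted transmission factors decrease continuously with thickness.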

  19. Photons in dense nuclear matter: Random-phase approximation

    NASA Astrophysics Data System (ADS)

    Stetina, Stephan; Rrapaj, Ermal; Reddy, Sanjay

    2018-04-01

    We present a comprehensive and pedagogic discussion of the properties of photons in cold and dense nuclear matter based on the resummed one-loop photon self-energy. Correlations among electrons, muons, protons, and neutrons in β equilibrium that arise as a result of electromagnetic and strong interactions are consistently taken into account within the random phase approximation. Screening effects, damping, and collective excitations are systematically studied in a fully relativistic setup. Our study is relevant to the linear response theory of dense nuclear matter, calculations of transport properties of cold dense matter, and investigations of the production and propagation of hypothetical vector bosons such as the dark photons.

  20. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    PubMed

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distributions based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated, with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over the dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).
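
    The regional dose-reporting step described here, averaging voxel energy deposition over small labeled regions, can be sketched directly. The voxel energies, region labels and masses below are synthetic stand-ins for the CT/SPECT-derived data a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel data: energy deposited per voxel (MeV) and an
# integer region label per voxel (e.g. segmented organ sub-regions).
energy = rng.random((16, 16, 16))
labels = rng.integers(0, 4, size=(16, 16, 16))
voxel_mass_g = 0.4   # illustrative mass of one voxel, grams

def regional_dose(energy, labels, voxel_mass_g):
    """Mean absorbed dose per region: total energy deposited in the
    region's voxels divided by the region's mass
    (MeV/g here; scale appropriately for Gy)."""
    doses = {}
    for region in np.unique(labels):
        mask = labels == region
        doses[int(region)] = energy[mask].sum() / (mask.sum() * voxel_mass_g)
    return doses

doses = regional_dose(energy, labels, voxel_mass_g)
```

    Reporting per-region averages rather than per-voxel doses is precisely the optimization that keeps run times reasonable while still going beyond whole-organ averages from standardized phantoms.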

Top