
1

Object description and performance of Monte Carlo simulation in photon transport

Object description is important for performing photon transport efficiently by means of a Monte Carlo method. The description methods include a voxel-based description, which represents an object as a union of voxels of the same size, and an octree description, which represents an object with cubic regions of several sizes. The octree representation requires fewer regions

T. Sato; Koichi Ogawa

1999-01-01
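The voxel-vs-octree trade-off described in the abstract above can be illustrated with a short sketch (illustrative code, not from the paper; the grid shape and function name are invented for this example): a homogeneous cube collapses to a single octree node, so a blocky object needs far fewer octree regions than voxels.

```python
import numpy as np

def octree_count(grid):
    """Number of homogeneous cubic regions an octree needs to represent a
    cubic occupancy grid (side a power of two): a uniform cube is one
    node; a mixed cube splits into eight half-size octants."""
    n = grid.shape[0]
    if n == 1 or grid.all() or not grid.any():
        return 1
    h = n // 2
    return sum(octree_count(grid[x:x + h, y:y + h, z:z + h])
               for x in (0, h) for y in (0, h) for z in (0, h))

# an 8^3 grid containing a solid 4^3 corner block
grid = np.zeros((8, 8, 8), dtype=bool)
grid[:4, :4, :4] = True
n_voxels = int(grid.sum())        # voxel description: 64 unit cubes
n_regions = octree_count(grid)    # octree description: 8 regions
```

For this toy object the octree needs 8 regions where the voxel description needs 64, which is the kind of saving the abstract refers to.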

2

PERFORMANCE MEASUREMENT OF MONTE CARLO PHOTON TRANSPORT ON PARALLEL MACHINES

Particle transport is an inherently parallel (or embarrassingly parallel) computational method. There is also interest in exploring multithreaded architectures to improve the parallel performance of scientific

Majumdar, Amit

3

A GPU implementation of EGSnrc's Monte Carlo photon transport for imaging applications

NASA Astrophysics Data System (ADS)

EGSnrc is a well-known Monte Carlo simulation package for coupled electron-photon transport that is widely used in medical physics applications. This paper proposes a parallel implementation of the photon transport mechanism of EGSnrc for graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). The implementation is specifically designed for imaging applications in the diagnostic energy range and does not model electrons. No approximations or simplifications of the original EGSnrc code were made other than using single floating-point precision instead of double precision and a different random number generator. To avoid performance penalties due to the random nature of the Monte Carlo method, the simulation was divided into smaller steps that could easily be performed in a parallel fashion suitable for GPUs. Speedups of 20 to 40 times for 64³ to 256³ voxels were observed while the accuracy of the simulation was preserved. A detailed analysis of the differences between the CUDA simulation and the original EGSnrc was conducted. The two simulations were found to produce equivalent results for scattered photons, and an overall systematic deviation of less than 0.08% was observed for primary photons.

Lippuner, Jonas; Elbakri, Idris A.

2011-11-01

4

TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

Cullen, D.E.

1997-11-22

5

ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2008-04-01

6

Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ≈3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ''Carlos Haya'', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

2010-07-15
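The weight-based splitting and Russian roulette that the importance map drives can be sketched in a few lines (a schematic of the generic technique with invented names and conventions, not the authors' ant-colony-controlled implementation): particles entering important cells are split into lighter copies, and those in unimportant cells are rouletted, so the expected total weight is conserved.

```python
import random

def adjust_population(particles, importance, w_ref=1.0, rng=random.random):
    """Split particles entering important regions and play Russian
    roulette on those in unimportant ones, conserving expected weight.
    particles: list of (cell, weight); importance: dict cell -> factor."""
    survivors = []
    for cell, w in particles:
        w_eff = w * importance[cell]
        if w_eff > w_ref:                   # important: split into copies
            n = int(w_eff / w_ref) + 1
            survivors += [(cell, w / n)] * n
        else:                               # unimportant: roulette
            p = w_eff / w_ref
            if rng() < p:
                survivors.append((cell, w / p))
    return survivors
```

Splitting preserves total weight exactly; roulette preserves it only in expectation, which is all an unbiased Monte Carlo estimate requires.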

7

Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)

NASA Astrophysics Data System (ADS)

Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, the so-called "Doctor by Information Technology", will be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution to provide "accurate" dose distributions within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster because it is the most accurate, efficient, and economic. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. This algorithm solved the main difficulty of parallel MC simulation (overlapped random number series in the different processors) using multiple random number seeds. The parallel results agreed well with the serial ones. The parallel efficiency approached 100%, as was expected.

Kum, Oyeon

2004-11-01
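The seeding problem the abstract solves (overlapped random-number series across processors) can be sketched with NumPy's `SeedSequence`, a standard modern remedy; this is an illustrative toy (a serial "map over ranks" estimating π, not the author's MPI code, and all names here are invented for the example):

```python
import numpy as np

def mc_chunk(seed, n):
    """One worker's share of a toy Monte Carlo integral (quarter circle)."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return np.count_nonzero(x * x + y * y < 1.0)

def parallel_style_pi(n_workers=4, n_per_worker=50_000, root_seed=42):
    # SeedSequence.spawn hands each (possibly remote) worker its own
    # statistically independent stream -- one standard fix for the
    # overlapped-random-number-series problem the abstract mentions
    seeds = np.random.SeedSequence(root_seed).spawn(n_workers)
    hits = sum(mc_chunk(s, n_per_worker) for s in seeds)  # "map over ranks"
    return 4.0 * hits / (n_workers * n_per_worker)
```

In a real MPI code each rank would receive one spawned seed and run its `mc_chunk` concurrently; the reduction over `hits` becomes an `MPI_Reduce`.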

8

Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

2009-11-15

9

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2004-06-01

10

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-01-01

11

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs.

Whalen, D.J. (Brigham Young Univ., Provo, UT (USA)); Hollowell, D.E.; Hendricks, J.S. (Los Alamos National Lab., NM (USA))

1990-01-01

12

The Forschungszentrum Karlsruhe operates a partial body counter, which is designed for the in vivo measurement of low-energy photon emitters in the human body. Recently, a numerical procedure has been developed which allows for the calculation of individual calibration factors for this partial body counter. The procedure is based on a Monte Carlo simulation of the radiation transport from the contaminated organ or tissue within the body to the detectors using the MCNP5 code. For simulation of the human body, the MEET Man dataset of the Institute of Biomedical Techniques of the University Karlsruhe has been applied. The derived calibration factors were compared with the respective values measured using some physical phantoms such as the Lawrence Livermore National Laboratory torso phantom and the bone phantoms of the New York University Medical Center and the US Transuranium and Uranium Registry. PMID:17261536

Doerfel, H; Heide, B

2007-01-01

13

An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the Continuously Slowing Down Approximation (CSDA). Photon transport was dealt with using the delta tracking method. Photoelectric effect, Compton scattering and pair production were modeled. Voxelised geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ histories of electrons were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a CPU with a single core used. (authors)

Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)] [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)

2013-07-01
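The delta (Woodcock) tracking method mentioned in the abstract above can be sketched compactly (an illustrative toy under assumed cross sections, not ARCHER's CUDA implementation): the photon streams through a fictitious uniform medium with a majorant cross section, and each tentative collision is accepted as real with probability σ(x)/σ_max, so no boundary crossings need to be computed.

```python
import math
import random

def woodcock_distance(sigma_of, sigma_max, rng):
    """Sample the distance to the next real collision by delta (Woodcock)
    tracking: take exponential steps with the majorant cross section
    sigma_max, then accept each tentative collision as real with
    probability sigma(x)/sigma_max, otherwise keep flying (virtual hit)."""
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_max  # exponential step
        if rng.random() < sigma_of(x) / sigma_max:      # real, not virtual
            return x

# two-layer slab: sigma = 0.2 for x < 1, sigma = 2.0 beyond (majorant 2.0)
sigma = lambda x: 0.2 if x < 1.0 else 2.0
rng = random.Random(0)
samples = [woodcock_distance(sigma, 2.0, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)   # analytic mean for this slab: ~1.3157
```

The analytic mean free path for this two-layer slab is (1 − e^(−0.2))/0.2 + e^(−0.2)/2 ≈ 1.3157, which the sampled mean reproduces.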

14

The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

Morgan C. White

2000-07-01

15

NASA Astrophysics Data System (ADS)

In transportation, photonics helps save lives and improves safety. A vehicle - whether it is drawn, floated or driven - carries stringent physical limitations. Any photonic hardware will increase the vehicle's weight, space and power consumption. But this hardware will improve safety for drivers and passengers. Photonics also helps to inspect railroads and highways. Lasers, smart imagers and computers for data processing offer a range of capabilities for transportation industry. The proposed paper will discuss the state of the art in the area of photonic applications for transportation.

Inozemtsev, Vladimir G.; Shilin, Victor A.; Syster, Vladimir G.

2002-04-01

16

Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions and material. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168

Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

2014-01-01

17

Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.

Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.

1996-04-01

18

The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.

Kirk, B.L.

1985-12-01

19

THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

2007-01-10

20

Improved geometry representations for Monte Carlo radiation transport.

ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

Martin, Matthew Ryan (Cornell University)

2004-08-01

21

Treatment of Compton scattering of linearly polarized photons in Monte Carlo codes

NASA Astrophysics Data System (ADS)

The basic formalism of Compton scattering of linearly polarized photons is reviewed, and some simple prescriptions to deal with the transport of polarized photons in Monte Carlo simulation codes are given. Fortran routines, based on the described method, have been included in MCNP, a widely used code for neutron, photon and electron transport. As this improved version of the code can be of general use, the implementation and the procedures to employ the new version of the code are discussed.

Matt, Giorgio; Feroci, Marco; Rapisarda, Massimo; Costa, Enrico

1996-10-01
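The polarized Compton treatment hinges on the azimuthal dependence of the Klein-Nishina cross section. A minimal rejection sampler for the azimuth (an illustrative sketch, not the paper's Fortran routines; the energy ratio eps = E'/E is taken as given here rather than sampled) looks like:

```python
import math
import random

def sample_azimuth_polarized(cos_theta, eps, rng):
    """Rejection-sample the azimuth phi, measured from the incident
    photon's polarization vector, from the polarized Klein-Nishina
    azimuthal density  f(phi) ∝ eps + 1/eps - 2*sin^2(theta)*cos^2(phi),
    where eps = E'/E is the Compton energy ratio for this theta."""
    sin2 = 1.0 - cos_theta * cos_theta
    f_max = eps + 1.0 / eps                 # cos^2 term can only lower f
    while True:
        phi = 2.0 * math.pi * rng.random()
        f = f_max - 2.0 * sin2 * math.cos(phi) ** 2
        if rng.random() * f_max <= f:
            return phi
```

For 90° scattering the sampled azimuths cluster perpendicular to the polarization vector, reproducing the sin²θ cos²φ suppression that makes Compton scattering a polarimetry tool.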

22

The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

2000-03-01

23

Coupled electron-photon radiation transport

Massively-parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code, ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.

Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.

2000-01-17

24

Recent advances in the Mercury Monte Carlo particle transport code

We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551 (United States)] [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551 (United States)

2013-07-01

25

Automated Monte Carlo biasing for photon-generated electrons near surfaces.

This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

2009-09-01
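The weight-window mechanism at the core of the report above can be sketched as follows (a schematic of the standard technique with invented parameter names, not Sandia's adjoint-driven implementation): particles above the window's upper bound are split, particles below its lower bound face Russian roulette, and survivors of roulette are promoted into the window.

```python
import random

def weight_window(w, w_low, w_up, rng):
    """Apply a weight window: split particles above the upper bound,
    Russian-roulette those below the lower bound, pass the rest through.
    Returns the list of surviving weights (expected total weight = w)."""
    if w > w_up:                              # too heavy: split
        n = int(w / w_up) + 1
        return [w / n] * n
    if w < w_low:                             # too light: roulette
        w_survive = 0.5 * (w_low + w_up)      # survivors land mid-window
        return [w_survive] if rng.random() < w / w_survive else []
    return [w]                                # inside the window
```

In a full code the bounds (w_low, w_up) vary per cell and energy group; setting them from an adjoint-flux estimate, as the report describes, is what makes the biasing automatic.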

26

Photon transport in binary photonic lattices

NASA Astrophysics Data System (ADS)

We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.

Rodríguez-Lara, B. M.; Moya-Cessa, H.

2013-03-01

27

NASA Astrophysics Data System (ADS)

We investigate the high-energy charge dynamics of electrons and holes in the multiplication process of single photon avalanche diodes. The technologically important multiplication layer materials InP and In0.52Al0.48As, used in near infrared photon detectors, are analyzed and compared with GaAs. We use the full-band Monte Carlo technique to solve the Boltzmann transport equation which improves the state-of-the-art treatment of high-field carrier transport in the multiplication process. As a result of the computationally efficient treatment of the scattering rates and the parallel central processing unit power of modern computer clusters, the full-band Monte Carlo calculation of the breakdown characteristics has become feasible. The breakdown probability features a steeper rise versus the reverse bias for smaller multiplication layer widths for InP, In0.52Al0.48As, and GaAs. Both the time to avalanche breakdown and jitter decrease with shrinking size of the multiplication region for the three examined III-V semiconductors.

Dolgos, Denis; Meier, Hektor; Schenk, Andreas; Witzigmann, Bernd

2012-05-01

28

Parallel processing Monte Carlo radiation transport codes

Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.

McKinney, G.W.

1994-02-01

29

Overview of Monte Carlo radiation transport codes

The Radiation Safety Information Computational Center (RSICC) is the designated central repository of the United States Department of Energy (DOE) for nuclear software in radiation transport, safety, and shielding. Since the center was established in the early 1960s, there have been several Monte Carlo (MC) particle transport computer codes contributed by scientists from various countries. An overview of the neutron

B. L. Kirk; Bernadette Lugue

2010-01-01

30

Precise Monte Carlo Simulation of Single-Photon Detectors

We demonstrate the importance and utility of Monte Carlo simulation of single-photon detectors. Devising an optimal simulation is strongly influenced by the particular application because of the complexity of modern, avalanche-diode-based single-photon detectors. Using a simple yet very demanding example of random number generation via detection of Poissonian photons exiting a beam splitter, we present a Monte Carlo simulation that faithfully reproduces the serial autocorrelation of random bits as a function of detection frequency over four orders of magnitude of the incident photon flux. We conjecture that this simulation approach can be easily modified for use in many other applications.

Mario Stipčević; Daniel J. Gauthier

2014-11-13
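The beam-splitter random-bit scheme the abstract studies can be mocked up in a few lines (a deliberately crude toy with invented names and parameters, far simpler than the paper's detector model): Poissonian photons pick an output port at random, each port's detector has a dead time, and the resulting bit stream acquires a negative serial autocorrelation at high detection rates because a just-fired detector cannot fire again immediately.

```python
import random

def beamsplitter_bits(n_photons, rate, dead_time, rng):
    """Toy model of random-bit generation from Poissonian photons at a
    50/50 beam splitter: each output port has a detector with a dead time
    during which it cannot fire; a click on port 0/1 yields bit 0/1."""
    t, ready = 0.0, [0.0, 0.0]
    bits = []
    for _ in range(n_photons):
        t += rng.expovariate(rate)   # Poissonian arrival times
        port = rng.randrange(2)      # 50/50 splitter
        if t >= ready[port]:         # detector live -> registered click
            bits.append(port)
            ready[port] = t + dead_time
    return bits

def lag1_autocorr(bits):
    """Serial (lag-1) autocorrelation of the bit stream."""
    n = len(bits)
    mean = sum(bits) / n
    var = sum((b - mean) ** 2 for b in bits) / n
    cov = sum((bits[i] - mean) * (bits[i + 1] - mean)
              for i in range(n - 1)) / (n - 1)
    return cov / var
```

With zero dead time the bits are uncorrelated; once the photon rate approaches the inverse dead time, clicks tend to alternate between ports and the lag-1 autocorrelation turns strongly negative, which is the detection-frequency dependence the paper's far more detailed simulation reproduces.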

31

NOTE: An efficient framework for photon Monte Carlo treatment planning

NASA Astrophysics Data System (ADS)

Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. This means automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. In this framework, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. 
Thereby, comparisons are performed between MC calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules keeping the system highly flexible and efficient. This work was presented in part at the First European Workshop on Monte Carlo Treatment Planning (EWG-MCTP) held in Gent, Belgium from 22 to 25 October 2006.

Fix, Michael K.; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J.

2007-09-01

32

A New Monte Carlo Method for Time-Dependent Neutrino Radiation Transport

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck & Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

Ernazar Abdikamalov; Adam Burrows; Christian D. Ott; Frank Löffler; Evan O'Connor; Joshua C. Dolence; Erik Schnetter

2012-03-13

33

MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

Forster, R.A.; Little, R.C.; Briesmeister, J.F.

1989-01-01

34

Shield weight optimization using Monte Carlo transport calculations

NASA Technical Reports Server (NTRS)

Outlines are given of the theory used in the FASTER-3 Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries. The code has the additional capability of calculating the minimum weight layered unit shield configuration which will meet a specified dose rate constraint. It includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have a specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. Results are presented for sample problems involving primary neutron and both primary and secondary photon transport in a spherical reactor shield configuration. These results include the optimization of the shield configuration.

Jordan, T. M.; Wohl, M. L.

1972-01-01

35

Monte Carlo simulation for the transport beamline

In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning tool in order to optimize the number of shots and the dose delivery.

Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy)]; Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P. Giuria 1, 10125 Torino (Italy)]; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic)]; Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy)]; Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)]

2013-07-26

36

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

1999-01-01

37

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

38

Radiative transport for two-photon light

NASA Astrophysics Data System (ADS)

We consider the propagation of two-photon light in a random medium. We show that the Wigner transform of the two-photon amplitude obeys an equation that is analogous to the radiative transport equation for classical light. Using this result, we investigate the propagation of an entangled photon pair.

Markel, Vadim A.; Schotland, John C.

2014-09-01

39

Vertical Photon Transport in Cloud Remote Sensing Problems

NASA Technical Reports Server (NTRS)

Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

Platnick, S.

1999-01-01

40

Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems

NASA Technical Reports Server (NTRS)

The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.

Platnick, Steven

1999-01-01
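The analytic approximation in the record above rests on random-walk scaling of the root-mean-square displacement with the number of scatterings. A brute-force check of that scaling for the simplest case — isotropic scattering with exponentially distributed free paths; in the paper's formula the asymmetry parameter would effectively rescale the step count — can be sketched as follows (this is my own sketch, not the paper's formula):

```python
import math
import random

def rms_horizontal_displacement(n_scatter, n_photons=20000, mfp=1.0, seed=2):
    """Monte Carlo estimate of the RMS horizontal displacement after
    n_scatter isotropic scatterings, each followed by an exponentially
    distributed free path of mean mfp."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        x = y = 0.0
        for _ in range(n_scatter):
            phi = 2.0 * math.pi * rng.random()            # isotropic azimuth
            step = -mfp * math.log(1.0 - rng.random())    # exponential path
            x += step * math.cos(phi)
            y += step * math.sin(phi)
        total += x * x + y * y
    return math.sqrt(total / n_photons)

# Random-walk theory: E[R^2] = N * E[step^2] = 2 * N * mfp^2, so the
# RMS displacement grows as mfp * sqrt(2 * N).
```

For example, `rms_horizontal_displacement(25)` should land close to `math.sqrt(50) ≈ 7.07` mean free paths, consistent with the square-root growth the approximation exploits.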

41

Low variance methods for Monte Carlo simulation of phonon transport

Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...

Péraud, Jean-Philippe M. (Jean-Philippe Michel)

2011-01-01

42

Monte Carlo treatment planning for photon and electron beams

NASA Astrophysics Data System (ADS)

During the last few decades, accuracy in photon and electron radiotherapy has increased substantially. This is partly due to enhanced linear accelerator technology, providing more flexibility in field definition (e.g. the usage of computer-controlled dynamic multileaf collimators), which led to intensity modulated radiotherapy (IMRT). Important improvements have also been made in the treatment planning process, more specifically in the dose calculations. Originally, dose calculations relied heavily on analytic, semi-analytic and empirical algorithms. The more accurate convolution/superposition codes use pre-calculated Monte Carlo dose "kernels" partly accounting for tissue density heterogeneities. It is generally recognized that the Monte Carlo method is able to increase accuracy even further. Since the second half of the 1990s, several Monte Carlo dose engines for radiotherapy treatment planning have been introduced. To enable the use of a Monte Carlo treatment planning (MCTP) dose engine in clinical circumstances, approximations have been introduced to limit the calculation time. In this paper, the literature on MCTP is reviewed, focussing on patient modeling, approximations in linear accelerator modeling and variance reduction techniques. An overview of published comparisons between MC dose engines and conventional dose calculations is provided for phantom studies and clinical examples, evaluating the added value of MCTP in the clinic. An overview of existing Monte Carlo dose engines and commercial MCTP systems is presented and some specific issues concerning the commissioning of a MCTP system are discussed.

Reynaert, N.; van der Marck, S. C.; Schaart, D. R.; Van der Zee, W.; Van Vliet-Vroegindeweij, C.; Tomsej, M.; Jansen, J.; Heijmen, B.; Coghe, M.; De Wagter, C.

2007-04-01

43

Errors in glass photon transport calculation

A calculational capability for photon sources and photon transport in a reactor lattice was added to the GLASS system in 1973. The calculation has been used in a variety of applications since 1973, and has always produced results that appear reasonable. The GLASS photon transport calculation, however, was never compared to an independent photon transport calculation at any stage of its development. Recently, the GLASS calculation was compared to calculations performed by the SHIELD system module SNONE (SHIELD system version of the LASL DTF-IV code) and significant differences were found in the calculation of deposited photon heat. This led to the discovery of certain errors in the GLASS calculations, as discussed in this report.

Finch, D.R.

1981-03-03

44

At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles, this approach is called the "All Particle Method". The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models "hard wired" into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to be used to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig.

Cullen, D.E.; Perkins, S.T.; Plechaty, E.F.; Rathkopf, J.A.

1988-06-01

45

Transport of photons produced by lightning in clouds

NASA Technical Reports Server (NTRS)

The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation; and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.

Solakiewicz, Richard

1991-01-01
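The Monte Carlo approach sketched in this record — follow each photon until it escapes the cloud or is absorbed — can be illustrated with a deliberately crude model: a plane slab standing in for the cuboidal cloud, isotropic scattering, and a hypothetical single-scattering albedo. All of these simplifications are mine, not the author's setup:

```python
import math
import random

def cloud_escape_fraction(tau=5.0, albedo=0.9, n_photons=20000, seed=3):
    """Fraction of photons escaping a plane slab of optical depth tau.
    Photons start at the slab centre (a crude stand-in for a lightning
    flash inside the cloud); at each interaction they scatter
    isotropically with probability `albedo`, else they are absorbed."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        z = tau / 2.0                    # optical-depth coordinate
        mu = 2.0 * rng.random() - 1.0    # isotropic direction cosine
        while True:
            z += -math.log(1.0 - rng.random()) * mu  # exponential free path
            if z < 0.0 or z > tau:
                escaped += 1             # photon left the cloud
                break
            if rng.random() > albedo:
                break                    # photon absorbed
            mu = 2.0 * rng.random() - 1.0  # isotropic re-scattering
    return escaped / n_photons
```

With `albedo=1.0` (conservative scattering) every photon eventually escapes; lowering the albedo traps an increasing fraction, which is the scattered-versus-absorbed bookkeeping the abstract describes.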

46

Thread Divergence and Photon Transport on the GPU (U). LA-UR-13-27057

NASA Astrophysics Data System (ADS)

Monte Carlo methods are commonly used to numerically solve particle transport problems. A major disadvantage of Monte Carlo methods is the time required to obtain accurate solutions. Graphics processing units (GPUs) have seen increasing use as accelerators for improving performance in high-performance computing. Extracting the best performance from GPUs requires careful consideration of code execution and data movement. In particular, performance can be reduced if threads diverge due to branching, and Monte Carlo codes are susceptible to branching penalties. We explore different schemes to reduce thread divergence in photon transport and report on our performance findings.

Aulwes, Rob T.; Zukaitis, Anthony

2014-06-01
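The divergence problem this abstract describes arises because photons in the same GPU warp branch to different events. One standard remedy (not necessarily the LANL scheme) is to restructure from history-based to event-based processing, grouping particles by their next event so each batch executes a single code path. A CPU-side Python illustration of the idea, with hypothetical function names and a toy two-event physics:

```python
import random

def next_event(photon, rng):
    """Hypothetical event selector: 'scatter' with probability equal
    to the photon's albedo, otherwise 'absorb'."""
    return 'scatter' if rng.random() < photon['albedo'] else 'absorb'

def step_event_based(photons, rng):
    """One transport step in event-based form: photons are first sorted
    into groups by event type, then each group is processed as a batch.
    On a GPU, each batch would map to coherent, divergence-free warps."""
    groups = {'scatter': [], 'absorb': []}
    for p in photons:
        groups[next_event(p, rng)].append(p)
    for p in groups['scatter']:
        p['weight'] *= 1.0    # placeholder for the scattering physics
    return groups['scatter']  # absorbed photons are retired

rng = random.Random(4)
photons = [{'albedo': 0.8, 'weight': 1.0} for _ in range(1000)]
survivors = step_event_based(photons, rng)
# Roughly 80% of the photons scatter and survive this step.
```

The grouping pass costs extra memory traffic, which is exactly the execution-versus-data-movement trade-off the abstract highlights.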

47

Photonic sensor applications in transportation security

NASA Astrophysics Data System (ADS)

There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.

Krohn, David A.

2007-09-01

48

A generic algorithm for Monte Carlo simulation of proton transport

NASA Astrophysics Data System (ADS)

A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.

Salvat, Francesc

2013-12-01

49

A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

Highlights: • We introduce a variance reduction scheme for Monte Carlo (MC) transport. • The primary application is atmospheric remote sensing. • The technique first solves the adjoint problem using a deterministic solver. • Next, the adjoint solution is used as an importance function for the MC solver. • The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.

Bal, Guillaume, E-mail: gb2030@columbia.edu [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States); Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Mail Stop 169-237, Pasadena, CA 91109 (United States); Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030 (United States); Langmore, Ian, E-mail: ianlangmore@gmail.com [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States)

2011-08-20

50

Electromagnetic energy transport in finite photonic structures.

We have derived, for oblique propagation, an equation relating the averaged energy flux density to energy fluxes arising in the process of scattering by a lossless finite photonic structure. The latter fluxes include those associated with the dispersion relation of the structure, reflection, and interference between the incident and reflected waves. We have also derived an explicit relation between the energy flux density and the group velocity, which provides a simple and systematic procedure for studying theoretically and experimentally the properties of the energy transport through a wide variety of finite photonic structures. Such a relation may be regarded as a generalization of the corresponding one for infinite periodic systems to finite photonic structures. A finite, N-period photonic crystal was used to illustrate the usefulness of our results. PMID:24921471

de Dios-Leyva, M; Duque, C A; Drake-Pérez, J C

2014-06-01

51

Computational methods of electron/photon transport

A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated.

Mack, J.M.

1983-01-01

52

Coupled proton/neutron transport calculations using the S_N and Monte Carlo methods

Coupled charged/neutral particle transport calculations are most often carried out using the Monte Carlo technique. For example, the ITS, EGS, and MCNP (Version 4) codes are used extensively for electron/photon transport calculations while HETC models the transport of protons, neutrons and heavy ions. In recent years there has been considerable progress in deterministic models of electron transport, and many of these models are applicable to protons. However, even with these new models (and the well established models for neutron transport) deterministic coupled neutron/proton transport calculations have not been feasible for most problems of interest, due to a lack of coupled multigroup neutron/proton cross-section sets. Such cross-section sets are now being developed at Los Alamos. Using these cross sections we have carried out coupled proton/neutron transport calculations using both the S_N and Monte Carlo methods. The S_N calculations used a code called SMARTEPANTS (simulating many accumulative Rutherford trajectories, electron, proton and neutral transport solver) while the Monte Carlo calculations are done with the multigroup option of the MCNP code. Both SMARTEPANTS and MCNP require standard multigroup cross-section libraries. HETC, on the other hand, avoids the need for precalculated nuclear cross sections by modeling individual nucleon collisions as the transported neutrons and protons interact with nuclei. 21 refs., 1 fig.

Filippone, W.L. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Little, R.C.; Morel, J.E.; MacFarlane, R.E.; Young, P.G. (Los Alamos National Lab., NM (USA))

1991-01-01

53

A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport

A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.

Guillaume Bal; Anthony Davis; Ian Langmore

2011-05-07

54

In the algorithm of Leksell GAMMAPLAN (the treatment planning software of Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the ⁶⁰Co source and radiated isotropically in a solid angle of 4π. The percentages of scattered photons within all photons reaching the phantom space using different collimators were calculated with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to single beam dose profiles for different collimators. Output spectra were calculated for the four different collimators. Increasing the efficiency of simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. The generated backscattered photons from within the ⁶⁰Co source and the beam channel also contribute to the output spectra.

Cheung, Joel Y.C.; Yu, K.N. [Gamma Knife Centre, Canossa Hospital, 1 Old Peak Road, Hong Kong (China); Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong (China)

2006-01-15

56

Efficient photon treatment planning by the use of Swiss Monte Carlo Plan

NASA Astrophysics Data System (ADS)

Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure requiring many user interactions. Automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new GUI-based photon MC environment has been developed, resulting in a very flexible framework, namely the Swiss Monte Carlo Plan (SMCP). Appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: source, beam modifiers, and patient. The source part includes a phase-space source, source models, and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass particles in memory; hence no files are used as interfaces. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, three patient cases are shown, with comparisons between MC-calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.

Fix, M. K.; Manser, P.; Frei, D.; Volken, W.; Mini, R.; Born, E. J.

2007-06-01
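The modular, in-memory particle flow described in the SMCP abstract (source, beam-modifier modules, patient, with particles passed in memory rather than through phase-space files) can be sketched as a simple pipeline. This is a toy illustration, not SMCP code; the class names and the jaw model are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    x: float
    y: float
    z: float
    energy: float
    alive: bool = True

class Module:
    """Base class for one transport stage (source, beam modifier, patient...)."""
    def transport(self, p: Particle) -> Particle:
        raise NotImplementedError

class SimpleJaw(Module):
    """Toy beam modifier: absorbs particles outside a square field opening."""
    def __init__(self, half_width: float):
        self.half_width = half_width

    def transport(self, p: Particle) -> Particle:
        if abs(p.x) > self.half_width or abs(p.y) > self.half_width:
            p.alive = False
        return p

class Pipeline:
    """Passes each particle through the modules in memory; no files as interfaces."""
    def __init__(self, modules: List[Module]):
        self.modules = modules

    def run(self, particles):
        survivors = []
        for p in particles:
            for m in self.modules:
                p = m.transport(p)
                if not p.alive:
                    break
            if p.alive:
                survivors.append(p)
        return survivors

# Two particles: one inside the 1.0 cm half-width opening, one blocked by the jaw
survivors = Pipeline([SimpleJaw(1.0)]).run(
    [Particle(0.5, 0.0, 0.0, 6.0), Particle(2.0, 0.0, 0.0, 6.0)]
)
```

The design point mirrored here is that each beam modifier is one interchangeable module, so a simple or an exact geometry can be swapped in without touching the rest of the chain.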

57

Monte Carlo simulation accuracy for calibrating germanium detector photon efficiency

Over the past 30 years, Monte Carlo simulation of photons interacting with matter has gradually improved to the extent that it now appears suitable for calibrating germanium detectors for counting efficiency in gamma-ray spectral analysis. The process is particularly useful because it can be applied for a variety of source shapes and spatial relations between source and detector by simply redefining the geometry, whereas calibration with radioactive standards requires a separate set of measurements for each source shape and location relative to the detector. Simulation accuracy was evaluated for two large (126% and 110%) and one medium-sized (20%) detectors with radioactive point sources at distances of 10 m, 1.6 m, and 0.50 m and with aqueous solutions in a 0.5-L reentrant beaker and in jars of similar volume but various dimensions. The sensitivity in comparing measured and simulated results was limited by a combined uncertainty of about 3% in the radioactive standards and experimental conditions. Simulation was performed with the MCNP-4 code.

Kamboj, Sunita; Kahn, B.

1997-08-01

58

ELECTRON/PHOTON TRANSPORT AND ITS APPLICATIONS

This paper surveys the wide range of radiation physics topics that involve the transport of energetic electrons and x-rays. Applications in the high-energy range (100 keV to 30 MeV) include: radiation therapy physics (including treatment planning), industrial radiation processing of materials, shielding, experimental and theoretical dosimetry, dose profiles near material interfaces, beta-ray dosimetry, characterization of the photon spectrum from radioisotope

John C. Garth

2005-01-01

59

MC++: Parallel, portable, Monte Carlo neutron transport in C++

We have developed an implicit Monte Carlo neutron transport code in C++ using the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and parallelism. Current capabilities of MC++ are discussed, along with future plans and physics and performance results on many different platforms.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States)]; Nolen, S.D. [Texas A&M Univ., College Station, TX (United States). Dept. of Nuclear Engineering]

1997-02-01

60

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

2008-11-20
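The staged-learning idea above, using stage k's samples to retune the sampling and weighting procedure for stage k+1, can be illustrated on a toy integral. This sketch is not the authors' algorithm: the integrand, the truncated-exponential importance density, and the rate-refitting rule are all illustrative assumptions.

```python
import random, math

TRUE_I = (1.0 - math.exp(-3.0)) / 3.0   # exact value of the toy integral

def f(x):
    """Toy integrand standing in for a forward tally (attenuated flux)."""
    return math.exp(-3.0 * x)

def q_pdf(x, a):
    """Truncated exponential importance density on [0, 1], rate a > 0."""
    return a * math.exp(-a * x) / (1.0 - math.exp(-a))

def q_sample(a, rng):
    """Inverse-CDF sampling from the truncated exponential."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-a))) / a

def staged_estimate(n_stages=4, n_per_stage=4000, seed=1):
    """Each stage re-fits the importance density's rate from the previous
    stage's weighted samples (a heuristic stand-in for the staged learning
    described in the abstract)."""
    rng = random.Random(seed)
    a = 0.5                                   # initial, nearly flat density
    est = 0.0
    for _ in range(n_stages):
        xs = [q_sample(a, rng) for _ in range(n_per_stage)]
        ws = [f(x) / q_pdf(x, a) for x in xs]
        est = sum(ws) / n_per_stage           # unbiased for any valid rate a
        mean_x = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        a = min(10.0, max(0.1, 1.0 / mean_x)) # match an exponential's mean
    return est

estimate = staged_estimate()
```

Because the estimator stays unbiased for any rate, each stage is free to move the density toward the integrand; as q approaches the shape of f, the weights flatten and the variance shrinks, which is the mechanism behind the geometric convergence the abstract describes.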

61

Optix: A Monte Carlo scintillation light transport code

NASA Astrophysics Data System (ADS)

The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user with a variety of numerical and graphical outputs through a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. In addition, extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement.

Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi

2014-02-01

62

Thermoelectric transport perpendicular to thin-film heterostructures calculated using the Monte Carlo technique

The Monte Carlo technique is used to calculate electrical as well as thermoelectric transport properties perpendicular to thin-film heterostructures. The transition between ballistic thermionic transport and fully diffusive thermoelectric transport is also described.

63

This paper describes the characterization of radiation doses to the hands of nuclear medicine technicians resulting from the handling of radiopharmaceuticals. Radiation monitoring using ring dosimeters indicates that finger dosimeters that are used to show compliance with applicable regulations may overestimate or underestimate radiation doses to the skin depending on the nature of the particular procedure and the radionuclide being handled. To better understand the parameters governing the absorbed dose distributions, a detailed model of the hands was created and used in Monte Carlo simulations of selected nuclear medicine procedures. Simulations of realistic configurations typical for workers handling radiopharmaceuticals were performed for a range of source photon energies. The lack of charged-particle equilibrium necessitated fully coupled photon-electron transport calculations. The results show that the dose to different regions of the fingers can differ substantially from dosimeter readings when dosimeters are located at the base of the finger. We tried to identify consistent patterns that relate the actual dose to the dosimeter readings. These patterns depend on the specific work conditions and can be used to better assess the absorbed dose to different regions of the exposed skin.

Ilas, Dan [ORNL]; Eckerman, Keith F. [ORNL]; Karagiannis, Harriet [ORNL]

2009-01-01

64

Specific Absorbed Fractions of Electrons and Photons for Rad-HUMAN Phantom Using Monte Carlo Method

The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs for photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose relative to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was simulated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that the SAFs from Rad-HUMAN follow similar trends but are larger than those from the other two models. The differences were due to the racial and anatomical differences in o...

Wang, Wen; Long, Peng-cheng; Hu, Li-qin

2014-01-01

65

Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation.

The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple-source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams. PMID:15272680

Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C M

2004-06-21

66

Using Monte Carlo simulations to commission photon beam output factors—a feasibility study

NASA Astrophysics Data System (ADS)

This study investigates the feasibility of using Monte Carlo methods to assist the commissioning of photon beam output factors from a medical accelerator. The Monte Carlo code, BEAMnrc, was used to model 6 MV and 18 MV photon beams from a Varian linear accelerator. When excellent agreements were obtained between the Monte Carlo simulated and measured dose distributions in a water phantom, the entire geometry including the accelerator head and the water phantom was simulated to calculate the relative output factors. Simulated output factors were compared with measured data, which consist of a typical commission dataset for the output factors. The measurements were done using an ionization chamber in a water phantom at a depth of 10 cm with a source-detector distance of 100 cm. Square fields and rectangular fields with widths and lengths ranging from 4 cm to 40 cm were studied. The result shows a very good agreement (<1.5%) between the Monte Carlo calculated and the measured relative output factors for a typical commissioning dataset. The Monte Carlo calculated backscatter factors to the beam monitor chamber agree well with measured data in the literature. Monte Carlo simulations have also been shown to be able to accurately predict the collimator exchange effect and its component for rectangular fields. The information obtained is also useful to develop an algorithm for accurate beam modelling. This investigation indicates that Monte Carlo methods can be used to assist commissioning of output factors for photon beams.

Ding, George X.

2003-12-01

68

Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

NASA Astrophysics Data System (ADS)

Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.

Pratx, Guillem; Xing, Lei

2011-12-01
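The Map/Reduce split described above (Map tasks compute photon histories, a Reduce task scores absorption) can be sketched with a toy 1D absorption-weighting model. This is not the MC321 or Hadoop code; the optical coefficients, binning, and survival threshold are illustrative assumptions.

```python
import random
from collections import defaultdict

def map_task(seed, n_photons, mu_a=0.1, mu_s=0.9, n_bins=10, bin_size=1.0):
    """Map task: trace photon histories, emit (depth_bin, absorbed_weight) pairs."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                      # total interaction coefficient
    pairs = []
    for _ in range(n_photons):
        x, w = 0.0, 1.0                     # depth and photon weight
        while w > 1e-4:                     # roulette threshold (illustrative)
            x += rng.expovariate(mu_t)      # free path to the next interaction
            b = min(int(x // bin_size), n_bins - 1)
            dw = w * mu_a / mu_t            # weight absorbed at this site
            pairs.append((b, dw))
            w -= dw                         # remainder continues (forward only,
                                            # scattering direction ignored here)
    return pairs

def reduce_task(all_pairs):
    """Reduce task: sum the absorbed weight deposited in each spatial bin."""
    absorbed = defaultdict(float)
    for b, dw in all_pairs:
        absorbed[b] += dw
    return dict(absorbed)

# Independent "map" shards; on a real cluster these would run in parallel,
# each keyed by its own random seed exactly as in the embarrassingly
# parallel decomposition the abstract describes.
shards = [map_task(seed, 1000) for seed in range(4)]
result = reduce_task(p for shard in shards for p in shard)
total = sum(result.values())
```

Because every shard only needs its seed and photon count, a failed node can simply be re-executed, which is the fault-tolerance property the abstract attributes to the MapReduce framework.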

69

Validation of the MCNP4B code in the coupled transport of photons and neutrons

MCNP4B is a Monte Carlo code widely used in nuclear physics. In some applications, such as calculations of accelerator shielding, it is important to determine the neutron dose due to photonuclear reactions. The gamma-neutron patch version 1.0 adds such reactions to the base code (which does not include them), enabling the coupled transport of photons, neutrons, and electrons. In this report the patch

Rafael Díaz Heredia; Judith Gallego Blanco; Mario Santana Leitner

70

Topologically Robust Transport of Photons in a Synthetic Gauge Field

NASA Astrophysics Data System (ADS)

Electronic transport is localized in low-dimensional disordered media. The addition of gauge fields to disordered media leads to fundamental changes in the transport properties. We implement a synthetic gauge field for photons using silicon-on-insulator technology. By determining the distribution of transport properties, we confirm that waves are localized in the bulk and localization is suppressed in edge states. Our system provides a new platform for investigating the transport properties of photons in the presence of synthetic gauge fields.

Mittal, S.; Fan, J.; Faez, S.; Migdall, A.; Taylor, J. M.; Hafezi, M.

2014-08-01

71

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...

Romano, Paul K. (Paul Kollath)

2013-01-01

72

Photon beam description in PEREGRINE for Monte Carlo dose calculations

The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte

Cox

1997-01-01

73

Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

NASA Astrophysics Data System (ADS)

This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. 
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.

Slattery, Stuart R.
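The core MCSA idea, accelerating a fixed-point iteration by estimating the residual correction with Monte Carlo random walks on the Neumann series of H = I - A, can be sketched for a small linear system. This is a minimal illustration under the assumption that the row norms of H are below 1, not the preconditioned production solver described above; the matrix and parameters are illustrative.

```python
import random

def pick_column(row, norm, rng):
    """Sample column j with probability |H[i][j]| / norm."""
    u = rng.random() * norm
    acc, last = 0.0, None
    for j, h in enumerate(row):
        if h != 0.0:
            last = j
            acc += abs(h)
            if u < acc:
                return j
    return last                      # guard against float edge at u ~= norm

def mc_neumann(H, r, i, n_walks, rng, max_steps=60):
    """Monte Carlo estimate of [sum_m H^m r]_i via weighted random walks."""
    total = 0.0
    for _ in range(n_walks):
        state, w, tally = i, 1.0, r[i]
        for _ in range(max_steps):
            row = H[state]
            norm = sum(abs(h) for h in row)
            if norm == 0.0:
                break
            state = pick_column(row, norm, rng)
            w *= norm if row[state] > 0.0 else -norm   # H_ij / P_ij
            tally += w * r[state]
        total += tally
    return total / n_walks

def mcsa_solve(A, b, n_iters=15, n_walks=400, seed=2):
    """MCSA-style iteration: x <- x + y, with y ~ A^{-1} r estimated by
    Monte Carlo walks on H = I - A (requires row norms of H below 1)."""
    n = len(b)
    rng = random.Random(seed)
    H = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    x = [0.0] * n
    for _ in range(n_iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + mc_neumann(H, r, i, n_walks, rng) for i in range(n)]
    return x

# Near-identity test system so that H = I - A has row norms of at most 0.4
A = [[1.2, 0.1, 0.0],
     [0.1, 1.1, 0.1],
     [0.0, 0.1, 1.3]]
b = [1.0, 2.0, 3.0]
x = mcsa_solve(A, b)
residual = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(3))) for i in range(3))
```

Since the Monte Carlo noise in each correction scales with the current residual, the residual contracts geometrically, which is the synthetic-acceleration effect the dissertation compares against Krylov iterations.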

74

Control of single-photon transport in a one-dimensional waveguide by a single photon

NASA Astrophysics Data System (ADS)

We study controllable single-photon transport in a one-dimensional waveguide with a nonlinear dispersion relation coupled to a three-level emitter in a cascade configuration. An extra cavity field is introduced to drive one of the level transitions of the emitter. In the resonance case, when the extra cavity does not contain photons, the input single photon will be reflected; when the cavity contains one photon, the full transmission of the input single photon can be obtained. In the off-resonance case, the single-photon transport can also be controlled by the parameters of the cavity. Therefore, we show that single-photon transport can be controlled by an extra cavity field.

Yan, Wei-Bin; Fan, Heng

2014-11-01

75

Modeling photon transport in transabdominal fetal oximetry

NASA Astrophysics Data System (ADS)

The possibility of optical oximetry of the blood in the fetal brain, measured across the maternal abdomen just prior to birth, is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diam fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm^2 fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 × 10^-6 cm^-2 if the fetal brain has the same optical properties as the mother and Tf = 1.0 × 10^-6 cm^-2 for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf - Tm)/Tm is -53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.

Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton

2000-07-01

76

Current status of the PSG Monte Carlo neutron transport code

PSG is a new Monte Carlo neutron transport code developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

Leppaenen, J. [VTT Technical Research Centre of Finland, Laempoemiehenkuja 3, Espoo, FI-02044 VTT (Finland)

2006-07-01

77

Photon transport in dense plant canopies

With the main challenges in the numerical solution of the neutron transport equation having largely been met (i.e., multidimensional geometries, full matrix scattering, time dependence, etc.), numerical transport specialists have sought new applications with new transport equations. One such application is radiative transfer associated with satellite remote sensing for environmental monitoring. To reliably apply remote sensing techniques to obtain information about plant coverings (called plant canopies), an understanding of how radiant energy interacts with the elements of the plant canopy is essential. For example, in the investigation of the photosynthesis process and leaf respiration, plant physiologists are primarily concerned with the complex biochemistry driven by radiant energy in the visible portion of the sun's spectrum. The agronomist, on the other hand, is concerned with how the features of the canopy influence the biochemical processes to promote photosynthesis. For both applications, knowledge of the amount of radiant energy input and the amount and wavelength spectrum of the reflected radiant energy is required. Currently, there are several approaches leading to estimates of the canopy reflectance (the angular response at the canopy surface), and one common formulation is the solution to the appropriate radiative transfer equation. The solution to the radiative transfer equation is complicated by the complexity of photon scattering interactions. In this paper, the solution to the one-angle radiative transfer equation for a dense canopy, with leaves assumed to scatter as Lambertian surfaces, is obtained using a technique originally applied to the conventional radiative transfer equation.

Ganapol, B.D.

1994-12-31

78

Optimization of Monte Carlo transport simulations in stochastic media

This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)

Liang, C.; Ji, W. [Dept. of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Inst., 110 8th street, Troy, NY (United States)

2012-07-01
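A plain (unmodified) Random Sequential Addition packing, the baseline that the abstract's fast RSA variant accelerates with a nearest-neighbor search, can be sketched by simple rejection sampling. The sphere count, radius, and box size below are arbitrary illustrative values, and the all-pairs overlap check is exactly the O(n) step the paper's nearest-neighbor method replaces.

```python
import random, math

def rsa_pack(n_target, radius, box=1.0, max_attempts=10000, seed=0):
    """Random Sequential Addition: insert non-overlapping spheres of one size
    into a cubic box by rejection. Returns the accepted sphere centers."""
    rng = random.Random(seed)
    centers = []
    attempts = 0
    while len(centers) < n_target and attempts < max_attempts:
        attempts += 1
        # Candidate center, kept a full radius away from every box face
        c = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        # Naive O(n) overlap test against all accepted spheres; the paper's
        # fast RSA replaces this with a nearest-neighbor search structure
        if all(math.dist(c, other) >= 2.0 * radius for other in centers):
            centers.append(c)
    return centers

centers = rsa_pack(50, 0.05)
```

At the low packing fraction used here rejections are rare; for TRISO-like packing fractions the rejection rate and the per-candidate overlap test both grow, which is why the accelerated search matters.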

79

Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors

The ensemble Monte Carlo (MC) simulation is accepted as a powerful numerical technique for studying electron transport. An algorithm for enforcing the Pauli exclusion principle in Monte Carlo simulations of degenerate and inhomogeneous semiconductors is presented; this algorithm has significant advantages for implementing the scattering rate.

80

Monte Carlo Characterization of a Highly Efficient Photon Detector

Highly efficient photon detectors play a major role in countless applications in physics, nuclear engineering, and medical physics. In nuclear engineering, radioactive waste can be characterized with techniques such as the nondestructive assay technique (PNDA). In medical physics, photon detectors are extensively used for diagnostic X-ray and computerized tomography (CT) imaging, nuclear medicine, and quite recently radiation therapy of cancer. In radiation therapy of cancer, ever more accurate delivery techniques spur the need for efficient detectors for high-energy photons in the mega-electron-volt range in order to allow imaging of the patient during radiation delivery. In particular, in tomotherapy, a megavoltage detector is used both for CT imaging and for verifying the dose received by the patients. Conventional megavoltage detection systems usually suffer from intrinsically low subject contrast. A high signal-to-noise ratio of the detection system can be achieved by keeping the noise as low as possible and/or by increasing the quantum efficiency of the detector. In this work, a candidate highly efficient detection system, an arc-shaped xenon gas ionization chamber, was characterized in terms of efficiency and spatial resolution.

Harry Keller; M. Glass; R. Hinderer; K. Ruchala; R. Jeraj; G. Olivera; T. R. Mackie; M. L. Corradini

2001-06-17

81

Highly Confined Photon Transport in Subwavelength Metallic Slot Waveguides

We report experimental realization of subwavelength metallic slot waveguides that "squeeze" photonic modes into subwavelength volumes. Such plasmonic modes are characterized by both micrometer-scale propagation and subwavelength confinement.

Atwater, Harry

82

Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

Smith, L.M.; Hochstedler, R.D. [Univ. of Tennessee Space Inst., Tullahoma, TN (United States). Dept. of Electrical Engineering]

1997-02-01

83

Acceleration of a Monte Carlo radiation transport code

Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

Hochstedler, R.D.; Smith, L.M. [The University of Tennessee Space Institute, B. H. Goethert Parkway, MS 21, Tullahoma, Tennessee 37388-8897 (United States)

1996-03-01

84

Resonance fluorescence near a photonic band edge: Dressed-state Monte Carlo wave-function approach

NASA Astrophysics Data System (ADS)

We introduce a dressed-state Monte Carlo wave-function technique to describe resonance fluorescence in a broad class of non-Markovian reservoirs with strong atom-reservoir interaction. The method recaptures photon localization effects which are beyond the Born and Markovian approximations, and describes the influence of the driving field on the atom-reservoir interaction. Using this approach, we predict a number of fundamentally new features in resonance fluorescence near the edge of a photonic band gap. In particular, the atomic population exhibits inversion for moderate applied field intensity. For a low external field intensity, the atomic system retains a long-time memory of its initial state.
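For orientation, the standard (Markovian) Monte Carlo wave-function step can be sketched as below; the dressed-state, non-Markovian variant described in this abstract is considerably more involved, so this is only an illustrative baseline, with the two-level amplitudes, rate, and step size as assumed names:

```python
import random
import cmath

def mcwf_step(psi, dt, gamma):
    """One Monte Carlo wave-function (quantum-jump) step for a two-level atom
    decaying at rate gamma; psi = (c_g, c_e) are the ground/excited amplitudes.
    Standard Markovian MCWF, not the dressed-state method of the paper."""
    c_g, c_e = psi
    p_jump = gamma * dt * abs(c_e) ** 2            # jump probability this step
    if random.random() < p_jump:
        return (1.0 + 0j, 0j)                      # photon emitted: collapse to ground
    # No jump: evolve under the non-Hermitian effective Hamiltonian
    # (free atom, no drive shown here) and renormalize the state.
    c_e = c_e * cmath.exp(-gamma * dt / 2)
    norm = (abs(c_g) ** 2 + abs(c_e) ** 2) ** 0.5
    return (c_g / norm, c_e / norm)
```

Averaging many such stochastic trajectories reproduces the master-equation dynamics; the paper's contribution is doing this in a dressed-state basis appropriate to a structured (photonic band-gap) reservoir.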

Quang, Tran; John, Sajeev

1997-11-01

85

The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. 
at the field margin, where changes of detector responses and dose conversion factors, as well as increases of the RBE, have to be anticipated. Parameter P(D)(200 keV) can also be used as guidance supporting the selection of a calibration geometry suitable for radiation dosimeters to be used in small radiation fields. PMID:21530198

Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

2011-09-01

86

In this work an ant colony algorithm has been implemented to optimize the Monte Carlo simulation, with the PENELOPE code, of small photon beams used in certain radiotherapy treatments. The ant colony method is an artificial intelligence algorithm that automates the application of variance-reduction techniques throughout a clinical linear accelerator. The variance-reduction techniques employed are Russian roulette and particle splitting,
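Russian roulette and particle splitting, the two variance-reduction techniques named above, are standard Monte Carlo devices. A minimal, code-agnostic sketch (the thresholds and survival weight are illustrative, not PENELOPE's actual parameters):

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survival):
    """Russian roulette below w_low, particle splitting above w_high.

    Returns the list of statistical weights that continue transport
    (empty if the particle is killed). Total weight is conserved on average.
    """
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survival
        if random.random() < weight / w_survival:
            return [w_survival]
        return []  # particle terminated
    if weight > w_high:
        # Splitting: replace one heavy particle by n lighter copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]  # inside the window: unchanged
```

What an optimizer such as the ant colony algorithm tunes is where along the accelerator geometry these windows sit and how aggressive the thresholds are, trading simulation cost against variance.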

S. García-Pareja; F. Manzano; P. Galán; A. M. Lallena; L. Brualla

87

Direct photon emission from hadronic sources: Hydrodynamics vs. Transport theory

Direct photon emission in heavy-ion collisions is calculated within the relativistic microscopic transport model UrQMD. We compare the results from the pure transport calculation to a hybrid-model calculation, where the high-density part of the evolution is replaced by an ideal three-dimensional fluid-dynamic calculation. The effects of viscosity, present in the transport model but neglected in ideal fluid dynamics, are examined. We study the contribution of different production channels and non-thermal collisions to the spectrum of direct photons. Detailed comparisons to the measurements by the WA98 collaboration are undertaken.

Bjoern Baeuchle; Marcus Bleicher

2008-10-02

88

Photonic quasicrystals: Disorder-enhanced light transport

NASA Astrophysics Data System (ADS)

Photonic quasicrystals are specially designed aperiodic materials that possess long-range order and are capable of transmitting light. Contrary to intuition, introducing disorder can be used to enhance the propagation of light through such labyrinth structures.

Vardeny, Z. Valy; Nahata, Ajay

2011-08-01

89

Topologically Robust Transport of Photons in a Synthetic Gauge Field

Electronic transport in low dimensions through a disordered medium leads to localization. The addition of gauge fields to disordered media leads to fundamental changes in the transport properties. For example, chiral edge states can emerge in two-dimensional systems with a perpendicular magnetic field. Here, we implement a "synthetic" gauge field for photons using silicon-on-insulator technology. By determining the distribution of transport properties, we confirm the localized transport in the bulk and the suppression of localization in edge states, using the "gold standard" for localization studies. Our system provides a new platform to investigate transport properties in the presence of synthetic gauge fields, which is important both from the fundamental perspective of studying photonic transport and for applications in classical and quantum information processing.

Mittal, S; Faez, S; Migdall, A; Taylor, J M; Hafezi, M

2014-01-01

90

Topologically Robust Transport of Photons in a Synthetic Gauge Field

Electronic transport in low dimensions through a disordered medium leads to localization. The addition of gauge fields to disordered media leads to fundamental changes in the transport properties. For example, chiral edge states can emerge in two-dimensional systems with a perpendicular magnetic field. Here, we implement a "synthetic" gauge field for photons using silicon-on-insulator technology. By determining the distribution of transport properties, we confirm the localized transport in the bulk and the suppression of localization in edge states, using the "gold standard" for localization studies. Our system provides a new platform to investigate transport properties in the presence of synthetic gauge fields, which is important both from the fundamental perspective of studying photonic transport and for applications in classical and quantum information processing.

S. Mittal; J. Fan; S. Faez; A. Migdall; J. M. Taylor; M. Hafezi

2014-03-31

91

A significant improvement upon its predecessor PHOTON, STAC8 is a valuable analytic code for quick and conservative beamline shielding designs for synchrotron radiation (SR) facilities. To check the applicability, accuracy, and limitations of STAC8, studies were conducted to compare STAC8 and PHOTON results with calculations using the FLUKA and EGS4 Monte Carlo codes. Doses and spectra for scattered SR in a

J. C. Liu; A. Fasso; A. Prinz; S. Rokni; Y. Asano

2004-01-01

92

In the algorithm of Leksell GAMMAPLAN (the treatment planning software of Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of Leksell Gamma Knife. The PRESTA

K. N. Yu; Joel Y. C. Cheung

2006-01-01

93

Photonic quantum transport in a nonlinear optical fiber

We theoretically study the transmission of few-photon quantum fields through a strongly nonlinear optical medium. We develop a general approach to investigate non-equilibrium quantum transport of bosonic fields through a finite-size nonlinear medium and apply it to a recently demonstrated experimental system where cold atoms are loaded in a hollow-core optical fiber. We show that when the interaction between photons is effectively repulsive, the system acts as a single-photon switch. In the case of attractive interaction, the system can exhibit either anti-bunching or bunching, associated with the resonant excitation of bound states of photons by the input field. These effects can be observed by probing statistics of photons transmitted through the nonlinear fiber.

Mohammad Hafezi; Darrick E. Chang; Vladimir Gritsev; Eugene Demler; Mikhail D. Lukin

2009-07-29

94

A Monte Carlo model of photon beams used in radiation therapy.

A generic Monte Carlo model of a photon therapy machine is described. The model, known as McRad, is based on EGS4 and has been in use since 1991. Its primary function has been the characterization of the incident photon fluence for use by dose calculation algorithms. The accuracy of McRad is examined by comparing the dose distributions in a water phantom generated using only the Monte Carlo data with measured dose distributions for two machines in our clinic: a 6 MV Varian Clinac 600C and the 15 MV beam from a Clinac 2100C. The Monte Carlo generated dose distributions are computed using a dose calculation algorithm based on the use of differential pencil beam kernels. It was found that the match to measured data could be improved if the model is tuned by adjusting the energy of the electron beam incident on the target. The beam profiles were found to be more sensitive indicators of the electron beam energy than the depth dose curves. Beyond the depths reached by contaminant electrons, the computed and measured depth dose curves agree to better than 1%. The comparison of beam profiles indicates that in regions up to within 1 cm of the field edge, the measured and computed doses generally agree to within 2%-3%. PMID:8531863

Lovelock, D M; Chui, C S; Mohan, R

1995-09-01

95

Monte Carlo based dose calculation algorithms require input data or distributions describing the phase space of the photons and secondary electrons prior to the patient-dependent part of the beam-line geometry. The accuracy of the treatment plan itself is dependent upon the accuracy of this distribution. The purpose of this work is to compare phase space distributions (PSDs) generated with the MCNP4b and EGS4 Monte Carlo codes for the 6 and 18 MV photon modes of the Varian 2100C and determine if differences relevant to Monte Carlo based patient dose calculations exist. Calculations are performed with the same energy transport cut-off values. At 6 MV, target bremsstrahlung production for MCNP4b is approximately 10% less than for EGS4, while at 18 MV the difference is about 5%. These differences are due to the different bremsstrahlung cross sections used in the codes. Although the absolute bremsstrahlung production differs between MCNP4b and EGS4, normalized PSDs agree at the end of the patient-independent geometry (prior to the jaws), resulting in similar dose distributions in a homogeneous phantom. EGS4 and MCNP4b are equally suitable for the generation of PSDs for Monte Carlo based dose computations. PMID:10616151

Siebers, J V; Keall, P J; Libby, B; Mohan, R

1999-12-01

96

Monte Carlo study of photon fields from a flattening filter-free clinical accelerator.

In conventional clinical linear accelerators, the flattening filter scatters and absorbs a large fraction of primary photons. Increasing the beam-on time, which also increases the out-of-field exposure to patients, compensates for the reduction in photon fluence. In recent years, intensity modulated radiation therapy has been introduced, yielding better dose distributions than conventional three-dimensional conformal therapy. The drawback of this method is the further increase in beam-on time. An accelerator with the flattening filter removed, which would increase photon fluence greatly, could deliver considerably higher dose rates. The objective of the present study is to investigate the dosimetric properties of 6 and 18 MV photon beams from an accelerator without a flattening filter. The dosimetric data were generated using the Monte Carlo programs BEAMnrc and DOSXYZnrc. The accelerator model was based on the Varian Clinac 2100 design. We compared depth doses, dose rates, lateral profiles, doses outside collimation, and total and collimator scatter factors for an accelerator with and without a flattening filter. The study showed that removing the filter increased the dose rate on the central axis by a factor of 2.31 (6 MV) and 5.45 (18 MV) at a given target current. Because the flattening filter is a major source of head scatter photons, its removal from the beam line could reduce the out-of-field dose. PMID:16696457

Vassiliev, Oleg N; Titt, Uwe; Kry, Stephen F; Pönisch, Falk; Gillin, Michael T; Mohan, Radhe

2006-04-01

97

Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector Radiative Transport Equation (vRTE). Monte Carlo integration of the vRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries and because it can intuitively incorporate elaborate physics. Aims. We present a novel Pre-Conditioned Backward Monte Carlo (PBMC) algorithm for solving the vRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods. We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly-polarising media such as Rayleigh atmospheres. The numerical difficulty is avoid...

García Muñoz, A.; Mills, F. P.

2014-01-01

98

We present a Monte Carlo method for obtaining solutions of the Boltzmann equation to describe phonon transport in micro- and nanoscale devices. The proposed method can resolve arbitrarily small signals (e.g., temperature ...

Peraud, Jean-Philippe Michel

99

At terrestrial high latitudes, the plasma flows along "open" field lines, gradually going from a collision-dominated region into a collisionless region. Over several decades, the (fluid-like) generalized transport equations (TE) and the particle-based Monte Carlo (MC) approaches evolved as two of the most powerful simulation techniques that address this problem. In contrast to the computationally intensive Monte Carlo, the transport

J. Ji; A. R. Barakat; R. W. Schunk

2009-01-01

100

TWOGEN—a simple Monte Carlo generator for two-photon reactions

NASA Astrophysics Data System (ADS)

TWOGEN samples the transverse-transverse luminosity function for real and virtual photons, then weights events with any user-supplied cross section σTT(γγ → X) in a "hit or miss" sampling, using a simple method to avoid the singularity at the minimum electron angle θmin = 0. Events within defined kinematic limits are accepted and the corresponding cross section is estimated, together with the effective luminosity of the run. Functions implemented for σTT include the production cross sections for lepton pairs, for the formation of narrow resonances and for hadron production in tagged, deep inelastic scattering. A comparison is made with an existing Monte Carlo generator and a simple analytic approximation.
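"Hit or miss" sampling is the standard acceptance-rejection method. A generic sketch follows; the sampler and weight function below are stand-ins for TWOGEN's luminosity function and user-supplied cross section, which are not reproduced here:

```python
import random

def hit_or_miss(sample_x, weight, w_max, n_events):
    """Acceptance-rejection: draw a candidate x, accept it with probability
    weight(x) / w_max. Accepted events follow the weighted distribution,
    provided w_max bounds weight(x) from above."""
    accepted = []
    while len(accepted) < n_events:
        x = sample_x()
        if random.random() * w_max < weight(x):
            accepted.append(x)
    return accepted

# Toy usage: sample x in [0, 1) with density proportional to x (mean -> 2/3)
random.seed(1)
events = hit_or_miss(random.random, lambda x: x, 1.0, 20000)
```

The efficiency of the method is the ratio of the average weight to w_max, which is why generators remap variables first (as TWOGEN does for the θmin singularity) so the weight stays close to its bound.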

Buijs, A.; Langeveld, W. G. J.; Lehto, M. H.; Miller, D. J.

1994-05-01

101

NASA Astrophysics Data System (ADS)

Three-dimensional Monte Carlo coupled electron-photon-positron transport calculations are often performed to determine characteristics such as energy or charge deposition in a wide range of systems exposed to radiation fields, such as electronic circuitry in a space environment, tissues exposed to radiotherapy linear accelerator beams, or radiation detectors. Modeling these systems constitutes a challenging problem for the available computational methods and resources because they can involve: (i) very large attenuation, (ii) large numbers of secondary particles due to the electron-photon-positron cascade, and (iii) large and highly forward-peaked scattering. This work presents a new automated variance reduction technique, referred to as ADEIS (Angular adjoint-Driven Electron-photon-positron Importance Sampling), that takes advantage of the capability of deterministic methods to rapidly provide approximate information about the complete phase-space in order to automatically evaluate variance reduction parameters. More specifically, this work focuses on the use of discrete ordinates importance functions to evaluate angular transport and collision biasing parameters, and on using them through a modified implementation of the weight-window technique. The application of this new method to complex Monte Carlo simulations has resulted in speedups as high as five orders of magnitude. Due to numerical difficulties in obtaining physical importance functions devoid of numerical artifacts, a limited form of smoothing was implemented to complement a scheme for automatic discretization parameter selection. This scheme improves the robustness, efficiency and statistical reliability of the methodology by optimizing the accuracy of the importance functions with respect to the additional computational cost from generating and using these functions. It was shown that it is essential to bias different species of particles with their specific importance functions. 
In the case of electrons and positrons, even though the physical scattering and energy-loss models are similar, the importance of positrons can be many orders of magnitude larger than the electron importance. More specifically, not explicitly biasing the positrons with their own set of importance functions results in an undersampling of the annihilation photons and, consequently, introduces a bias in the photon energy spectra. It was also shown that the implementation of the weight-window technique within the condensed-history algorithm of a Monte Carlo code requires that the biasing be performed at the end of each major energy step. Applying the weight window earlier in the step, i.e., before the last substep, will result in a biased electron energy spectrum. This bias is a consequence of systematic errors introduced in the energy-loss prediction due to an inappropriate application of the weight-window technique where the actual path length differs from the pre-determined path length used for evaluating the energy-loss straggling distribution.
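The central idea above, deriving weight-window bounds from an approximate adjoint (importance) function with a separate map per particle species, can be sketched as follows. The inverse-importance rule and all numbers are illustrative assumptions, not ADEIS's actual parameters:

```python
def window_bounds(importance, c_low=0.2, c_high=5.0):
    """Map an approximate importance I(region, energy, species) to weight-window
    bounds: target weight ~ 1 / I, so particles entering important regions are
    split into many light copies while unimportant ones face Russian roulette."""
    w_target = 1.0 / importance
    return c_low * w_target, c_high * w_target

# Species-specific maps matter: positron importance can exceed electron
# importance by orders of magnitude (hypothetical values for one region).
importance = {("electron", "shield"): 1e-3, ("positron", "shield"): 10.0}
e_lo, e_hi = window_bounds(importance[("electron", "shield")])
p_lo, p_hi = window_bounds(importance[("positron", "shield")])
# Positrons are then transported at far smaller weights than electrons here,
# which keeps their annihilation photons from being undersampled.
assert p_hi < e_lo
```

In a condensed-history code, the check against these bounds would be applied only at the end of each major energy step, for the reason explained above.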

Dionne, Benoit

102

NASA Technical Reports Server (NTRS)

A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

Jordan, T. M.

1970-01-01

103

The adult reference male and female computational voxel phantoms recommended by the ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs), expressed in absorbed dose per air kerma free-in-air, for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP Publication 110. The present work thus validates the use of the FLUKA code in computation of organ DCCs for photons using the ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV, in comparison with the results published in ICRP 74. The DCCs of the female voxel phantom were found to be higher than those of the male phantom for almost all organs in both geometries. PMID:21147784

Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K

2011-11-01

104

A Monte Carlo study of radiation transport through multileaf collimators.

Due to the significant increase in the number of monitor units used to deliver a dynamic IMRT treatment, the total MLC leakage (transmission plus scatter) can exceed 10% of the maximum in-field dose. To avoid dosimetric errors, this leakage must be accurately accounted for in the dose calculation and conversion of optimized intensity patterns to MLC trajectories used for treatment delivery. In this study, we characterized the leaf end transmission and leakage radiation for Varian 80- and 120-leaf MLCs using Monte Carlo simulations. The complex geometry of the MLC, including the rounded leaf end, leaf edges (tongue-and-groove and offset notch), mounting slots, and holes, was modeled using MCNP4b. Studies were undertaken to determine the leakage as a function of field size, components of the leakage, electron contamination, beam hardening and leaf tip effects. The leakage radiation with the MLC configured to fully block the field was determined. Dose for 6 and 18 MV beams was calculated at 5 cm depth in a water phantom located at 95 cm SSD, and normalized to the dose for an open field. Dose components were scored separately for radiation transmitted through and scattered from the MLC. For the 80-leaf MLC at 6 MV, the average leakage dose is 1.6%, 1.7%, 1.8%, and 1.9% for 5 x 5, 10 x 10, 15 x 15, and 20 x 20 cm2 fields, respectively. For the 120-leaf MLC at 6 MV, the average leakage dose is 1.6%, 1.6%, 1.7%, and 1.9% for the same field sizes. Measured leakage values for the 120-leaf MLC agreed with calculated values to within 0.1% of the open field dose. The increased leakage with field size is attributed to MLC scattered radiation. The fractional electron contamination for a blocked MLC field is greater than that for an open field. The MLC attenuation significantly affects the photon spectrum, resulting in an increase in percent depth dose at 6 MV; however, little effect is observed at 18 MV. 
Both phantom scatter and the finite source size contribute to the leaf tip profile observed in phantom. The results of this paper can be applied to fluence-to-trajectory and trajectory-to-fluence calculations for IMRT. PMID:11797953

Kim, J O; Siebers, J V; Keall, P J; Arnfield, M R; Mohan, R

2001-12-01

105

FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation

The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.

Hackel, B M; Nielsen Jr., D E; Procassini, R J

2009-02-25

106

Analysis of multicasting in photonic transport networks

The introduction of multicasting in optical transport networks is analysed and discussed. Three different optical path realisation techniques, involving the multicasting function, are examined: the Multicast Wavelength Path (MWP), the Multicast Virtual Wavelength Path (MVWP) and the Partial Multicast Virtual Wavelength Path (PVWP). A performance evaluation model is reported and results of the performance analysis are discussed. In addition several

Eugenio Iannone; Marco Listanti; Roberto Sabella

1996-01-01

107

A significant improvement upon its predecessor PHOTON, STAC8 is a valuable analytic code for quick and conservative beamline shielding designs for synchrotron radiation (SR) facilities. To check the applicability, accuracy, and limitations of STAC8, studies were conducted to compare STAC8 and PHOTON results with calculations using the FLUKA and EGS4 Monte Carlo codes. Doses and spectra for scattered SR in a few beam-target-shield geometries were calculated, with and without photon linear polarization effects. Areas for expanding the STAC8 capabilities, e.g., features of mirror-reflected light, double-Compton light calculations, and the use of monochromatic light, have been identified. Some of these features have been implemented and benchmarked against Monte Carlo calculations. Reasonable agreement was found between the STAC8 and Monte Carlo calculations.

Liu, J

2004-07-12

108

Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using two Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm3 regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.

Taylor, Michael, E-mail: michael.taylor@rmit.edu.au [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia)

2012-04-01

109

This paper presents the findings of an investigation into the Monte Carlo simulation of superficial cancer treatments of an internal canthus site using both kilovoltage photons and megavoltage electrons. The EGSnrc system of codes is utilised for the Monte Carlo simulation of the transport of electrons and photons through a phantom representative of either a water phantom or a treatment site in a patient. Two clinical treatment units are simulated: the Varian Medical Systems Clinac 2100C accelerator for 6 MeV electron fields and the Pantak Therapax SXT 150 X-ray unit for 100 kVp photon fields. Depth dose, profile and isodose curves for these simulated units are compared against those measured by ion chamber in a PTW Freiburg MP3 water phantom. Good agreement between simulated and measured data was achieved away from the surface of the phantom. Dose distributions are determined for both kV photon and MeV electron fields in the internal canthus site containing lead and tungsten shielding, rapidly sloping surfaces and different density interfaces. There is a relatively high level of dose deposition at tissue-bone and tissue-cartilage interfaces in the kV photon fields, in contrast to the MeV electron fields. This is reflected in the maximum doses in the PTV of the internal canthus field being 12 Gy for kV photons and 4.8 Gy for MeV electrons. From the dose distributions, DVH and dose comparators are used to assess the simulated treatment fields. Any indication as to which modality is preferable to treat the internal canthus requires careful consideration of many different factors; this investigation provides further perspective in being able to assess which modality is appropriate. PMID:19623857

Currie, B E

2009-06-01

110

NASA Astrophysics Data System (ADS)

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions of as much as 111% and 77%, respectively, with most results within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and the low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum, which may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower-energy electron population.

Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.

2008-09-01

111

Photon beam relative dose validation of the DPM Monte Carlo code in lung-equivalent media.

Validation experiments have been conducted using 6 and 15 MV photons in inhomogeneous (water/lung/water) media to benchmark the accuracy of the DPM Monte Carlo code for photon beam dose calculations. Small field sizes (down to 2 x 2 cm2) and low-density media were chosen for this investigation because the intent was to test the DPM code under conditions where lateral electronic disequilibrium effects are emphasized. The treatment head components of a Varian 21EX linear accelerator, including the jaws (defining field sizes of 2 x 2, 3 x 3 and 10 x 10 cm2), were simulated using the BEAMnrc code. The phase space files were integrated within the DPM code system, and central axis depth dose and profile calculations were compared against diode measurements in a homogeneous water phantom in order to validate the phase space. Results of the homogeneous phantom study indicated that the relative differences between DPM calculations and measurements were within +/- 1% (based on the rms deviation) for the depth dose curves; relative profile dose differences were on average within +/- 1%/1 mm. Depth dose and profile measurements were carried out using an ion-chamber and film, within an inhomogeneous phantom consisting of a 6 cm slab of lung-equivalent material embedded within solid water. For the inhomogeneous phantom experiment, DPM depth dose calculations were within +/- 1% (based on the rms deviation) of measurements; relative profile differences at depths within and beyond the lung were, on average, within +/- 2% in the inner and outer beam regions, and within 1-2 mm distance-to-agreement within the penumbral region. Relative point differences on the order of 2-3% were within the estimated experimental uncertainties. This work demonstrates that the DPM Monte Carlo code is capable of accurate photon beam dose calculations in situations where lateral electron disequilibrium effects are pronounced. PMID:12722808

Chetty, Indrin J; Charland, Paule M; Tyagi, Neelam; McShan, Daniel L; Fraass, Benedick A; Bielajew, Alex F

2003-04-01

112

Photonic transport control by spin-optical disordered metasurface

Photonic metasurfaces are ultrathin electromagnetic wave-molding metamaterials providing the missing link for the integration of nanophotonic chips with nanoelectronic circuits. An extra twist in this field originates from spin-optical metasurfaces providing the photon spin (polarization helicity) as an additional degree of freedom in light-matter interactions at the nanoscale. Here we report on a generic concept to control the photonic transport by disordered (random) metasurfaces with a custom-tailored geometric phase. This approach combines the peculiarity of random patterns to support extraordinary information capacity within the intrinsic limit of speckle noise, and the optical spin control in the geometric phase mechanism, simply implemented in two-dimensional structured matter. By manipulating the local orientations of anisotropic optical nanoantennas, we observe spin-dependent near-field and free-space open channels, generating state-of-the-art multiplexing and interconnects. Spin-optical disordered m...

Veksler, Dekel; Ozeri, Dror; Shitrit, Nir; Kleiner, Vladimir; Hasman, Erez

2014-01-01

113

We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in-modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790

Glaser, Adam K; Kanick, Stephen C; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W

2013-05-01

114

We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in-modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790

Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.

2013-01-01

115

Interface dosimetry: measurements and Monte Carlo simulations of low-energy photon beams

NASA Astrophysics Data System (ADS)

A comparison of dose perturbations at high-Z interfaces, measured with ion chambers and simulated with three Monte Carlo (MC) codes with differing transport algorithms (EGS4, MCNP4B and PENELOPE), is presented. The measured dose perturbations depend strongly on the chamber design and are always lower than the MC data. The EGS4 data are closest to the ion chamber values; MCNP4B and PENELOPE predict relatively higher magnitudes. The simulated secondary-electron spectra from high-Z interfaces differ between the codes but cannot explain the differences in magnitude. It is concluded that MC codes capable of handling low-energy transport, with better boundary-crossing algorithms, are needed for interface effects.

Das, Indra J.; Kassaee, Alireza; Verhaegen, Frank; Moskvin, Vadim P.

2001-06-01

116

LDRD project 151362 : low energy electron-photon transport.

At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure into the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.

Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James

2013-09-01

117

NASA Technical Reports Server (NTRS)

The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program treats geometric regions bounded by quadratic and quadric surfaces, with multiple radiation sources having specified space, angle, and energy dependence. Using importance sampling, the program calculates the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
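The importance-sampling idea mentioned above can be illustrated with a generic deep-penetration sketch (an illustration of the technique, not FASTER-III's actual algorithm): estimating the probability that a particle's free flight exceeds 10 mean free paths, where an analog estimator almost never scores but exponential path stretching does. All names and parameter values here are illustrative.

```python
import math, random

def transmission_analog(mfp, n, rng):
    """Analog estimate of P(free flight > mfp): sample exponential path lengths
    and count the rare direct traversals."""
    return sum(1 for _ in range(n) if -math.log(rng.random()) > mfp) / n

def transmission_importance(mfp, n, rng, stretch=0.1):
    """Path-stretching importance sampling: draw path lengths from a
    slower-decaying exponential (rate `stretch` < 1) and carry the
    likelihood-ratio weight so the estimator stays unbiased."""
    total = 0.0
    for _ in range(n):
        s = -math.log(rng.random()) / stretch   # sampled from stretch*exp(-stretch*s)
        if s > mfp:
            total += math.exp(-s) / (stretch * math.exp(-stretch * s))
    return total / n

rng = random.Random(42)
true_p = math.exp(-10)        # analytic answer, ~4.5e-5
est = transmission_importance(10.0, 100_000, rng)
```

With 10^5 histories the analog estimator typically scores only a handful of times, while the stretched estimator resolves the answer to within a few percent.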

Jordan, T. M.

1970-01-01

118

Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon “packets” as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct “humped” morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with “virtual-electrode” regions of strongly de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
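The scattering step for such photon "packets" is typically driven by sampling the Henyey-Greenstein phase function; below is a minimal sketch of the standard inversion formula used in tissue-optics MC codes (the anisotropy value g = 0.9 is a typical tissue figure assumed here, not a value taken from this paper):

```python
import math, random

def sample_hg_cosine(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function via the
    standard analytic inversion; for g = 0 the distribution is isotropic."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

rng = random.Random(7)
g = 0.9                       # assumed forward-peaked anisotropy, typical for tissue
n = 200_000
mean_cos = sum(sample_hg_cosine(g, rng) for _ in range(n)) / n
# sanity check: the mean cosine of the HG phase function equals g
```

A packet-tracking code applies this deflection at each scattering event, together with an azimuthal angle drawn uniformly on [0, 2π).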

Bishop, Martin J.; Plank, Gernot

2014-01-01

119

Time-dependent transport of electrons through a photon cavity

NASA Astrophysics Data System (ADS)

We use a non-Markovian master equation to describe the transport of Coulomb-interacting electrons through an electromagnetic cavity with one quantized photon mode. The central system is a finite parabolic quantum wire that is coupled weakly to external parabolic quasi-one-dimensional leads at t=0. With a stepwise introduction of complexity to the description of the system and a corresponding stepwise truncation of the ensuing many-body spaces, we are able to describe the time-dependent transport of Coulomb-interacting electrons through a geometrically complex central system. We take into account the full electromagnetic interaction of electrons and cavity photons, without resorting to the rotating-wave approximation or reducing the electron states to two levels. We observe that the number of initial cavity photons and their polarizations can have important effects on the transport properties of the system. The quasiparticles formed in the central system have lifetimes limited by the coupling to the leads and radiation processes active on a much longer time scale.

Gudmundsson, Vidar; Jonasson, Olafur; Tang, Chi-Shung; Goan, Hsi-Sheng; Manolescu, Andrei

2012-02-01

120

PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
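The mixed-language pattern can be sketched in miniature: a thin interactive Python facade over a stand-in "kernel" (here pure Python; in a real application it would be the compiled C/C++ code). All class and method names below are hypothetical illustrations, not PyMercury's actual API.

```python
import math, random

class ToyKernel:
    """Stand-in for a compiled Monte Carlo transport kernel (illustrative only):
    tracks single free flights through a slab measured in mean free paths."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.tallies = {"absorbed": 0, "transmitted": 0}

    def run(self, histories, thickness):
        for _ in range(histories):
            s = -math.log(self.rng.random())          # exponential free path
            key = "transmitted" if s > thickness else "absorbed"
            self.tallies[key] += 1

class InteractiveDriver:
    """Thin Python layer: lets a user run batches, inspect tallies, and
    sanity-check results between batches -- the usability benefit an
    interactive interface adds without touching the fast inner loop."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.history_count = 0

    def run_batch(self, n, thickness=2.0):
        self.kernel.run(n, thickness)
        self.history_count += n
        return dict(self.kernel.tallies)

driver = InteractiveDriver(ToyKernel(seed=1))
report = driver.run_batch(10_000)
# transmitted fraction should sit near exp(-2) for a 2-mfp slab
```

The design point is that performance-critical tracking stays in the kernel, while steering, testing and validation happen interactively in Python.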

Iandola, F N; O'Brien, M J; Procassini, R J

2010-11-29

121

Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

Miller, S.G.

1988-08-01

122

Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a

Carey E. Floyd; R. E. Coleman; R. J. Jaszczak

1985-01-01

123

Constructive interference among coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light λ to the transport mean free path length ls* of a random medium. In biological media, a large ls* ~ 0.5-2 mm >> λ results in an extremely small (~0.001°) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length Lscphoton random walk model of LEBS using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with experimental data.

Hariharan Subramanian; Prabhakar Pradhan; Young L. Kim; Yang Liu; Xu Li; Vadim Backman

2005-08-22

124

A new approach to hot particle dosimetry using a Monte Carlo transport code

A NEW APPROACH TO HOT PARTICLE DOSIMETRY USING A MONTE CARLO TRANSPORT CODE. A Thesis by DONNA MARIE BUSCHE, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1989. Major Subject: Health Physics. Approved as to style and content by: John Poston (Chair of Committee).

Busche, Donna Marie

2012-06-07

125

Photon transport in a time-dependent 3D region Aldo Belleni-Morante1

Photon transport is studied in a time-dependent 3D region V(t); the time behaviour of V(t) is unknown and the photon distribution function is measured at a location far

Ceragioli, Francesca

126

A Photon Transport Problem with a Time-Dependent Point Source

We consider a time-dependent problem of photon transport in an interstellar cloud with a point photon source modelled by a Dirac functional. The existence of a unique distributional solution

Mottram, Nigel

127

Fast perturbation Monte Carlo method for photon migration in heterogeneous turbid media

We present a two-step Monte Carlo (MC) method that is used to solve the radiative transfer equation in heterogeneous turbid media. The method exploits the one-to-one correspondence between the seed value of a random number generator and the sequence of random numbers. In the first step, a full MC simulation is run for the initial distribution of the optical properties and the “good” seeds (the ones leading to detected photons) are stored in an array. In the second step, we run a new MC simulation with only the good seeds stored in the first step, i.e., we propagate only detected photons. The effect of a change in the optical properties is calculated in a short time by using two scaling relationships. By this method we can increase the speed of a simulation up to a factor of 1300 in typical situations found in near-IR tissue spectroscopy and diffuse optical tomography, with a minimal requirement for hard disk space. Potential applications of this method for imaging of turbid media and the inverse problem are discussed. PMID:21633460
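A minimal sketch of the two-step idea under a toy 1-D slab model (the authors' code handles full heterogeneous media; the geometry, scattering rule and parameter values below are assumptions): the first pass records the seed and total path length of every detected photon, after which a new absorption coefficient requires only reweighting each stored path by exp(-mu_a * L) instead of re-running the simulation.

```python
import math, random

def simulate_photon(seed, mu_s, slab):
    """Toy 1-D scattering walk with scattering coefficient mu_s: returns the
    total path length if the photon exits through the far face of the slab
    ('detected'), else None. Absorption is handled later by reweighting."""
    rng = random.Random(seed)
    z, path, direction = 0.0, 0.0, 1.0
    while 0.0 <= z <= slab:
        s = -math.log(rng.random()) / mu_s
        z += direction * s
        path += s
        if rng.random() < 0.5:      # crude 1-D scattering: maybe reverse direction
            direction = -direction
    return path if z > slab else None

mu_s, slab, n = 1.0, 5.0, 20_000
# step 1: full run; remember the "good" seeds and their path lengths
good = [(seed, p) for seed in range(n)
        if (p := simulate_photon(seed, mu_s, slab)) is not None]

# step 2: any new absorption coefficient is evaluated by reweighting alone
def detected_intensity(mu_a):
    return sum(math.exp(-mu_a * length) for _, length in good) / n
```

Only the cheap reweighting step is repeated when the optical properties change, which is the source of the reported speed-up (the scaling relationship for a change in scattering is more involved than this absorption-only sketch).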

Sassaroli, Angelo

2012-01-01

128

The Photon Transport Transistor: a Novel Device for Optoelectronic Integration.

NASA Astrophysics Data System (ADS)

A theoretical model for a photon transport transistor was developed, based on a set of rate equations. At low bias conditions, the differential current gain is dominated by spontaneous emission of photons from the active region of the laser diode. It then decreases once stimulated emission becomes dominant and finally collapses to a small value at and above the lasing threshold of the laser diode. The residual current gain above lasing is due to the absorption of scattered photons from the laser cavity. Photon transport transistors fabricated using both the AlGaAs/GaAs and the InP/InGaAs material systems were studied. In the AlGaAs/GaAs material system, a maximum current gain of 0.84 was measured below lasing threshold. Above lasing, the current gain is only 0.13. Two methods were proposed to improve the current gain when biasing the device above lasing threshold. The first method was to coat the facets of the laser diode with high-reflectivity mirrors. After coating, the current gain was improved 2.7 times. In addition, the same technique can be used to extract the waveguide losses of a laser diode without requiring any optical calibration. The second method was to increase the optical coupling between the laser diode and the photodiode by merging the two devices. A measured maximum current gain of 1.35 and a calculated transit frequency of 1.9 GHz were obtained above lasing threshold when the photodiode was forward biased by 1 V. In addition, the output characteristics of the laser diode can be improved through photon recycling. This is the first device able to function as a laser diode and an electronic transistor under the same bias conditions. The first lattice-matched InP/InGaAs photon transport transistor was fabricated and characterized. This device consists of a multi-quantum-well LED integrated on top of a photodiode. The center wavelength of the emission spectra of the LED is at 1.55 µm.
The device fabricated has a voltage gain of 258 and a current gain of 0.07, resulting in a power gain of 18.1. In addition, it was demonstrated that the concept of this device is extendible to material systems other than AlGaAs/GaAs. (Abstract shortened by UMI.)

Chu, Ann-Kuo

129

The FERMI-Elettra FEL Photon Transport System

The FERMI-Elettra free electron laser (FEL) user facility is under construction at Sincrotrone Trieste (Italy), and it will be operative in late 2010. It is based on a seeded scheme providing an almost perfect transform-limited and fully spatially coherent photon beam. FERMI-Elettra will cover the wavelength range 100 to 3 nm with the fundamental harmonics, and down to 1 nm with higher harmonics. We present the layout of the photon beam transport system that includes: the first common part providing on-line and shot-to-shot beam diagnostics, called PADReS (Photon Analysis Delivery and Reduction System), and 3 independent beamlines feeding the experimental stations. Particular emphasis is given to the solutions adopted to preserve the wavefront, and to avoid damage on the different optical elements. Peculiar FEL devices, not common in the Synchrotron Radiation facilities, are described in more detail, e.g. the online photon energy spectrometer measuring shot-by-shot the spectrum of the emitted radiation, the beam splitting and delay line system dedicated to cross/auto correlation and pump-probe experiments, and the wavefront preserving active optics adapting the shape and size of the focused spot to meet the needs of the different experiments.

Zangrando, M. [Laboratorio TASC INFM-CNR, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Cudin, I.; Fava, C.; Godnig, R.; Kiskinova, M.; Masciovecchio, C.; Parmigiani, F.; Rumiz, L.; Svetina, C.; Turchet, A.; Cocco, D. [Sincrotrone Trieste SCpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy)

2010-06-23

130

Quantum Zeno switch for single-photon coherent transport

NASA Astrophysics Data System (ADS)

Using a dynamical quantum Zeno effect, we propose a general approach to control the coupling between a two-level system (TLS) and its surroundings, by modulating the energy-level spacing of the TLS with a high-frequency signal. We show that the TLS-surroundings interaction can be turned off when the ratio between the amplitude and the frequency of the modulating field is adjusted to be a zero of a Bessel function. The quantum Zeno effect of the TLS can also be observed by the vanishing of the photon reflection at these zeros. Based on these results, we propose a quantum switch to control the transport of a single photon in a one-dimensional waveguide. Our analytical results agree well with numerical results using Floquet theory.
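The switch condition, with the ratio of the modulating field's amplitude to its frequency tuned to a zero of a Bessel function, can be checked numerically. The sketch below locates the first zero of J0 (about 2.40483) from its integral representation alone; it is a self-contained illustration, not the paper's calculation.

```python
import math

def bessel_j0(x, n=4000):
    """J0 via its integral representation, midpoint rule:
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def first_j0_zero(lo=2.0, hi=3.0, tol=1e-8):
    """Bisection for the first positive zero of J0; tuning the drive's
    amplitude/frequency ratio to this value decouples the two-level system."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ratio = first_j0_zero()   # ~2.40483
```

At this ratio the effective coupling averages to zero over a drive period, which is the "off" state of the proposed single-photon switch.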

Zhou, Lan; Yang, S.; Liu, Yu-Xi; Sun, C. P.; Nori, Franco

2009-12-01

131

Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

Frambati, S.; Frignani, M. [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 1616 Genova (Italy)

2012-07-01

132

Experimental test of Monte Carlo proton transport at grazing incidence in GEANT4, FLUKA and MCNPX

The ability of the Monte Carlo (MC) particle transport codes GEANT4.8.1 and GEANT4.8.2, FLUKA2006 and MCNPX2.4.0 to model proton transport at grazing incidence onto tungsten blocks has been tested and compared to experimental measurements. The test geometry consisted of a narrow proton beam of two energies, 98 MeV and 180 MeV, impinging on a thick tungsten alloy block at grazing

Peter Kimstrand; Nina Tilly; Anders Ahnesjö; Erik Traneus

2008-01-01

133

The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs

In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the

Enrico Sangiorgi; Pierpaolo Palestri; David Esseni; Claudio Fiegna; Luca Selmi

2008-01-01

134

Coupled Monte Carlo simulation of transient electron-phonon transport in nanoscale devices

Using a coupled Monte Carlo method for solving both the electron and phonon Boltzmann transport equations, the transient electrothermal behavior of a nanoscale Si n-i-n device is simulated. The nonequilibrium optical phonon distribution is characterized by a temperature different from that of the acoustic phonons, and these two temperatures show different characteristics not only in the steady state, but also in transient

Yoshinari Kamakura; Nubuya Mori; Kenji Taniguchi; Tomofumi Zushi; Takanobu Watanabe

2010-01-01

135

Monte Carlo simulations of carrier transport in AlGaInP laser diodes

A self-consistent ensemble Monte Carlo simulation of charge transport in AlGaInP multiple quantum well lasers has been devised in an effort to understand why the light output from these lasers is reduced at high temperatures

G. C. Crow; R. A. Abram

1996-01-01

136

MC++: A parallel, portable, Monte Carlo neutron transport code in C++

MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A & M Univ., College Station, TX (United States)

1997-03-01

137

Radiative transport in fluorescence-enhanced frequency domain photon migration.

Small animal optical tomography has significant potential application for streamlining drug discovery and pre-clinical investigation of drug candidates. However, accurate modeling of photon propagation in small animal volumes is critical to quantitatively obtain accurate tomographic images. Herein we present solutions from a robust fluorescence-enhanced, frequency-domain radiative transport equation (RTE) solver with unique attributes that facilitate its deployment within tomographic algorithms. Specifically, the coupled equations describing time-dependent excitation and emission light transport are solved using discrete ordinates (SN) angular differencing along with linear discontinuous finite-element spatial differencing on unstructured tetrahedral grids. Source iteration in conjunction with diffusion synthetic acceleration is used to iteratively solve the resulting system of equations. This RTE solver can accurately and efficiently predict ballistic as well as diffusion limited transport regimes which could simultaneously exist in small animals. Furthermore, the solver provides accurate solutions on unstructured, tetrahedral grids with relatively large element sizes as compared to commonly employed solvers that use step differencing. The predictions of the solver are validated by a series of frequency-domain, phantom measurements with optical properties ranging from diffusion limited to transport limited propagation. Our results demonstrate that the RTE solution consistently matches measurements made under both diffusion and transport-limited conditions. This work demonstrates the use of an appropriate RTE solver for deployment in small animal optical tomography. PMID:17278821

Rasmussen, John C; Joshi, Amit; Pan, Tianshu; Wareing, Todd; McGhee, John; Sevick-Muraca, Eva M

2006-12-01

138

The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor Φ_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The Φ_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data.
The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a substitute for water for photon and electron dosimetry.
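Schematically, such factors enter a plastic-phantom measurement as simple multiplicative corrections; the function names and the choice of factors below are illustrative assumptions, not the paper's formalism:

```python
def reading_in_water(reading_plastic, k_pl):
    """Apply a water-to-plastic ionization conversion factor: the chamber
    reading in the plastic phantom times k_pl approximates the reading the
    same chamber would give in water (illustrative usage)."""
    return reading_plastic * k_pl

def water_equivalent_depth(depth_plastic, c_pl):
    """Electron-beam depth scaling: measurement depth in plastic times the
    depth-scaling factor c_pl gives the water-equivalent depth."""
    return depth_plastic * c_pl

# example: a k_pl within 0.5% of unity changes the reading by at most ~0.5%
corrected = reading_in_water(100.0, 1.005)
depth_w = water_equivalent_depth(5.0, 0.998)
```

Because the reported k_pl values are unity within 0.5%, the conversion is a sub-percent correction, consistent with the 1% dose agreement quoted above.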

Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi [Department of Radiological Technology, Kumamoto University School of Health Sciences, 4-24-1, Kuhonji, Kumamoto, 862-0976 (Japan); Division of Radiation Oncology, Tokyo Women's Medical University Hospital, Tokyo, 162-8666 (Japan); Department of Central Radiology, Kinki University Hospital, Osaka, 589-8511 (Japan); Department of Central Radiology, Shinshu University Hospital, Matsumoto, 390-8621 (Japan)

2009-07-15

139

Monte Carlo simulation of small electron fields collimated by the integrated photon MLC.

In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used. No additional secondary or tertiary add-ons such as applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated two discrepancies: too-broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby agreement mostly within 3%/3 mm was found. PMID:21242628

Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

2011-02-01

140

Response matrix Monte Carlo based on a general geometry local calculation for electron transport

A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.
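The two-stage idea described above — precompute response PDFs from a small-region analog simulation, then draw from them cheaply — can be sketched as follows. This is a toy illustration, not the RMMC code: the "analog" walk, its collision count and loss fraction are all invented for the example.

```python
import bisect
import random

def analog_local_walk(e0, n_collisions=20, loss_frac=0.02, rng=random):
    """Toy 'analog' walk across a small region: many collisions, each with a
    small stochastic energy loss, as for low-energy electron scattering."""
    e = e0
    for _ in range(n_collisions):
        e -= e * loss_frac * rng.random()
    return e

def build_exit_energy_pdf(e0, n_samples=5000, n_bins=50, seed=1):
    """Local calculation: run analog walks once and store the exit-energy
    histogram (the 'response') for cheap reuse."""
    rng = random.Random(seed)
    samples = [analog_local_walk(e0, rng=rng) for _ in range(n_samples)]
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for s in samples:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1
    cdf, running = [], 0
    for c in counts:
        running += c
        cdf.append(running)          # cumulative counts for inverse sampling
    return lo, width, cdf

def sample_exit_energy(pdf, rng=random):
    """Global calculation step: one cheap draw from the stored response."""
    lo, width, cdf = pdf
    u = rng.uniform(0, cdf[-1])
    k = bisect.bisect_left(cdf, u)
    return lo + (k + rng.random()) * width

pdf = build_exit_energy_pdf(100.0)                 # e.g. 100 keV entering
e_out = sample_exit_energy(pdf, random.Random(2))  # one RMMC-style step
```

The expensive analog physics is paid for once, in `build_exit_energy_pdf`; every subsequent transport step is a single inverse-CDF lookup.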

Ballinger, C.T.; Rathkopf, J.A. (Lawrence Livermore National Lab., CA (USA)); Martin, W.R. (Michigan Univ., Ann Arbor, MI (USA). Dept. of Nuclear Engineering)

1991-01-01

141

Purpose: To individually benchmark the incident electron parameters in a Monte Carlo model of an Elekta linear accelerator operating at 6 and 15 MV. The main objective is to establish a simplified but still precise benchmarking procedure that allows accurate dose calculations of advanced treatment techniques. Methods: The EGSnrc Monte Carlo user codes BEAMnrc and DOSXYZnrc are used for photon beam simulations and dose calculations, respectively. A 5 × 5 cm² field is used to determine both the incident electron energy and the electron radial intensity. First, the electron energy is adjusted to match the calculated depth dose to the measured one. Second, the electron radial intensity is adjusted to make the calculated dose profile in the penumbra region match the penumbrae measured by GafChromic EBT film. Finally, the mean angular spread of the incident electron beam is determined by matching calculated and measured cross-field profiles of large fields. The beam parameters are verified for various field sizes and shapes. Results: The penumbra measurements revealed a non-circular electron radial intensity distribution for the 6 MV beam, while a circular electron radial intensity distribution could best describe the 15 MV beam. These electron radial intensity distributions, given as the standard deviation of a Gaussian distribution, were found to be 0.25 mm (in-plane) and 1.0 mm (cross-plane) for the 6 MV beam and 0.5 mm (both in-plane and cross-plane) for the 15 MV beam. Introducing a small mean angular spread of the incident electron beam has a considerable impact on the lateral dose profiles of large fields. The mean angular spread was found to be 0.7° and 0.5° for the 6 and 15 MV beams, respectively. Conclusions: The incident electron beam parameters in a Monte Carlo model of a linear accelerator could be precisely and independently determined by the proposed benchmarking procedure.
As the dose distribution in the penumbra region is insensitive to moderate changes in electron energy and angular spread, accurate penumbra measurements are feasible for benchmarking the electron radial intensity distribution. This parameter is particularly important for accurate dosimetry of MLC-shaped fields and small fields.

Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore [Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim (Norway) and Department of Oncology and Radiotherapy, St. Olavs University Hospital, NO-7006 Trondheim (Norway); Department of Oncology and Radiotherapy, St. Olavs University Hospital, NO-7006 Trondheim (Norway); Department of Oncology and Radiotherapy, Aalesund Hospital, NO-6026 Aalesund (Norway); Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim (Norway)

2012-01-15

142

NASA Astrophysics Data System (ADS)

MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
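The constructive-solid-geometry representation described above — spatial cells defined by the signed sense of first- and second-order surfaces — can be illustrated with a minimal sketch. This is not MC21's actual data structure or API; the two-cell model, surface helpers, and naming are assumptions for illustration only.

```python
def sphere(cx, cy, cz, r):
    """Second-order surface: f(p) < 0 inside the sphere."""
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2 - r * r

def plane_x(x0):
    """First-order surface: f(p) < 0 on the -x side of the plane x = x0."""
    return lambda p: p[0] - x0

def find_cell(cells, p):
    """Return the first cell whose every (surface, sense) condition holds:
    sense -1 means the point must lie on the negative side (f(p) < 0),
    sense +1 on the positive side."""
    for name, conditions in cells:
        if all((surf(p) < 0) == (sense < 0) for surf, sense in conditions):
            return name
    return None   # point lies in no defined cell

# Toy model: a spherical 'fuel' region inside a slab of 'moderator'.
fuel_surf = sphere(0.0, 0.0, 0.0, 1.0)
cells = [
    ("fuel", [(fuel_surf, -1)]),
    ("moderator", [(fuel_surf, +1), (plane_x(-2.0), +1), (plane_x(2.0), -1)]),
]
```

A tracking routine would call `find_cell` at each particle position to select the material data for the next flight; hierarchical cells and template overlays extend this same sense test recursively.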

Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

2014-06-01

143

In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.

Naff, R.L.; Haley, D.F.; Sudicky, E.A.

1998-01-01

144

to transport energy through the media. Aggregates of these individual photons carry information on the internal ... there is a compromise between desired resolution and the effective scattering ratio at a detector. Tissues are highly scattering media for light, and the radiative transport equation is often

Chapman, Glenn H.

145

A Comparison of Monte Carlo Particle Transport Algorithms for Binary Stochastic Mixtures

Two Monte Carlo algorithms originally proposed by Zimmerman and by Zimmerman and Adams for particle transport through a binary stochastic mixture are numerically compared using a standard set of planar geometry benchmark problems. In addition to previously published comparisons of the ensemble-averaged probabilities of reflection and transmission, we include comparisons of detailed ensemble-averaged total and material scalar flux distributions. Because not all of the benchmark scalar flux distribution data used to produce plots in previous publications remain available, we have independently regenerated the benchmark solutions, including scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically realistic scalar flux distributions for the transport problems examined. The first algorithm reproduces the standard Levermore-Pomraning model results for the probabilities of reflection and transmission. The second algorithm generally produces significantly more accurate probabilities of reflection and transmission and also significantly more accurate total and material scalar flux distributions.
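The first algorithm's equivalence to the Levermore-Pomraning model rests on chord-length sampling: during each flight the distance to the next material interface is drawn from the mean chord length of the current material, alongside the distance to collision. A minimal 1D "rod model" sketch (particles stream forward; scatters keep them moving) with invented cross sections and a 50/50 starting-material choice, both simplifications:

```python
import math
import random

def transmit_binary_slab(thickness=2.0, sigma=(1.0, 0.1), chord_len=(0.5, 1.0),
                         p_absorb=(0.9, 0.1), n_hist=20000, seed=7):
    """Chord-length-sampling transmission estimate through a binary stochastic
    slab.  sigma[m] is the total cross section and chord_len[m] the mean chord
    of material m; chords are sampled as exponentials (Markovian mixture)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x, m = 0.0, rng.randint(0, 1)          # position, current material
        while True:
            d_col = -math.log(1.0 - rng.random()) / sigma[m]      # to collision
            d_int = -math.log(1.0 - rng.random()) * chord_len[m]  # to interface
            step = min(d_col, d_int)
            if x + step >= thickness:
                transmitted += 1               # leaked out the far side
                break
            x += step
            if d_col < d_int:                  # collision happened first
                if rng.random() < p_absorb[m]:
                    break                      # absorbed
                # else scattered forward; keep streaming
            else:
                m = 1 - m                      # crossed into the other material
    return transmitted / n_hist

t = transmit_binary_slab()
```

Sampling a fresh exponential chord at every flight is exactly the local-realization approximation that makes this algorithm reproduce the L-P equations; the more accurate second algorithm instead remembers the sampled interface until the particle actually reaches it.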

Brantley, P S

2009-02-23

146

Transport level in disordered organics: An analytic model and Monte-Carlo simulations

NASA Astrophysics Data System (ADS)

The transport level concept is a promising tool that greatly simplifies the analytic description of hopping transport in organics. However, quantitative modeling of mobility and the diffusion coefficient with this concept has so far been rare. Monte Carlo modeling of the transport level and related quantities within the Gaussian disorder model is carried out in this work. The methodology of this modeling is discussed and the physical essence of various approaches to the transport level is clarified. It is shown that an analytic model, which treats the transport level as the average energy of states from which a carrier can be released by means of energetically upward and downward jumps with equal probability, is suitable for quantitative modeling of the temperature dependence of the mobility and of the coefficient of field-stimulated diffusion. Simple analytic expressions for these transport coefficients are obtained.
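The balance definition above — the energy at which upward and downward release jumps are equally probable — can be estimated by Monte Carlo over a sampled Gaussian density of states. The sketch below uses a Boltzmann penalty for upward hops only, which is an assumed toy criterion, not the paper's model; for this toy balance the root has the closed form -σ²/(2kT), which the bisection should approach.

```python
import math
import random

def transport_level(sigma=0.1, kT=0.05, n_sites=20000, seed=3):
    """Bisection for the energy E_t at which a carrier's thermally activated
    upward release and its unweighted downward release are equally probable,
    over site energies sampled from a Gaussian DOS (mean 0, width sigma)."""
    rng = random.Random(seed)
    energies = [rng.gauss(0.0, sigma) for _ in range(n_sites)]

    def balance(e):
        # Boltzmann-weighted upward release minus unweighted downward release
        up = sum(math.exp(-(ep - e) / kT) for ep in energies if ep > e)
        down = sum(1 for ep in energies if ep <= e)
        return up - down

    lo, hi = -3.0 * sigma, 3.0 * sigma
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if balance(mid) > 0:        # upward release still dominates: move up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

et = transport_level()   # toy balance: exact root is -sigma**2 / (2*kT) = -0.1
```

With σ = 0.1 and kT = 0.05 the estimate should land near -0.1, below the center of the DOS, reproducing the qualitative picture of a transport level sinking with decreasing temperature.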

Nikitenko, V. R.; Strikhanov, M. N.

2014-02-01

147

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

2008-10-31

148

Transport in open spin chains: A Monte Carlo wave-function approach

We investigate energy transport in several two-level atom or spin-1/2 models by a direct coupling to heat baths of different temperatures. The analysis is carried out on the basis of a recently derived quantum master equation which describes the nonequilibrium properties of internally weakly coupled systems appropriately. For the computation of the stationary state of the dynamical equations, we employ a Monte Carlo wave-function approach. The analysis directly indicates normal diffusive or ballistic transport in finite models and hints toward an extrapolation of the transport behavior of infinite models.

Mathias Michel; Ortwin Hess; Hannu Wichterich; Jochen Gemmer

2008-03-07

149

Monte Carlo path sampling approach to modeling aeolian sediment transport

NASA Astrophysics Data System (ADS)

Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed, where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains feeds back on itself, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders-of-magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes.
Current computational approaches have limitations as well; single-grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
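The propagate-then-regather loop described above (stochastic Green-function-like steps, with the field re-estimated from the particle ensemble each iteration) can be sketched in one dimension. The density-dependent drift below is an invented stand-in for the wind-feedback physics; all parameters are toy values, not the paper's model.

```python
import random

def evolve_sand(n_particles=2000, n_cells=50, n_steps=30, seed=5):
    """Path-sampling sketch: sand mass as an ensemble of particles taking
    Gaussian downwind steps on [0, n_cells).  Drift shrinks where the gathered
    density is high, mimicking the wind losing speed as it carries grains."""
    rng = random.Random(seed)
    xs = [rng.random() * 10.0 for _ in range(n_particles)]   # initial pile
    for _ in range(n_steps):
        counts = [0] * n_cells                 # particle-in-cell: gather density
        for x in xs:
            counts[min(int(x), n_cells - 1)] += 1
        mean = n_particles / n_cells
        for i, x in enumerate(xs):
            local = counts[min(int(x), n_cells - 1)]
            drift = 0.3 / (1.0 + local / mean)  # denser cells transport slower
            step = rng.gauss(drift, 0.1)        # stochastic Green-function step
            xs[i] = min(max(x + step, 0.0), n_cells - 1e-9)  # clamp to domain
    return xs

xs = evolve_sand()
```

The essential structure matches the text: the field (here, cell counts) is rebuilt from the particles at every iteration before the next stochastic propagation, which is what makes the nonlinearity tractable, and the per-particle inner loop is trivially parallelizable.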

Hardin, E. J.; Mitasova, H.; Mitas, L.

2011-12-01

150

The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in

Ndimofor Chofor; Dietrich Harder; Kay Willborn; Antje Rühmann; Björn Poppe

2011-01-01

151

NASA Astrophysics Data System (ADS)

We study the photon-photon correlation properties of two-photon transport in a one-dimensional waveguide coupled to a nonlinear cavity via a real-space approach. It is shown that the intrinsic dissipation of the nonlinear cavity has an important effect upon the correlation of the transported photons. More importantly, strongly correlated photons can be obtained in the transmitted photons even when the nonlinear interaction strength is weak in the cavity. The strong photon-photon correlation is induced by the Fano resonance involving destructive interference between the plane wave and bound state for two-photon transport.

Xu, Xun-Wei; Li, Yong

2014-09-01

152

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most widely used deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling.
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
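The core mechanics — implicit capture plus weight windows generated from an approximate forward flux — can be sketched in a 1D rod geometry (particles stream forward; scatters keep them moving). Here the deterministic flux estimate is replaced by simple uncollided attenuation, an assumption for illustration only; the method above uses discrete-ordinates solutions instead.

```python
import math
import random

def run_with_weight_windows(sigma_t=1.0, absorb_frac=0.5, thickness=8.0,
                            n_hist=5000, seed=11):
    """Implicit capture + weight windows for deep penetration of a 1D slab.
    Window centers follow exp(-sigma_a*x), standing in for the approximate
    deterministic forward-flux estimate of the hybrid method (toy numbers)."""
    rng = random.Random(seed)
    sigma_a = sigma_t * absorb_frac

    def window_center(x):
        return math.exp(-sigma_a * x)

    leakage = 0.0
    for _ in range(n_hist):
        stack = [(0.0, 1.0)]                      # (position, weight)
        while stack:
            x, w = stack.pop()
            x += -math.log(1.0 - rng.random()) / sigma_t   # flight to collision
            if x >= thickness:
                leakage += w                      # transmitted: score the weight
                continue
            w *= 1.0 - absorb_frac                # implicit capture, no analog kill
            c = window_center(x)
            if w > 2.0 * c:                       # too heavy: split
                n = min(int(w / c), 4)
                stack.extend([(x, w / n)] * n)
            elif w < 0.5 * c:                     # too light: Russian roulette
                if rng.random() < w / c:
                    stack.append((x, c))          # survivor restored to center
            else:
                stack.append((x, w))
    return leakage / n_hist

r = run_with_weight_windows()   # analytic answer for this toy slab: exp(-4)
```

Splitting and roulette both preserve expected weight, so the estimator stays unbiased; the windows merely keep particle weights near the (approximate) flux everywhere, which is what evens out the tally variance across the domain.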

Keady, K P; Brantley, P

2010-03-04

153

Delocalization of electrons by cavity photons in transport through a quantum dot molecule

NASA Astrophysics Data System (ADS)

We present results on cavity-photon-assisted electron transport through two lateral quantum dots embedded in a finite quantum wire. The double quantum dot system is weakly connected to two leads and strongly coupled to a single quantized photon cavity mode with initially two linearly polarized photons in the cavity. Including the full electron-photon interaction, the transient current controlled by a plunger gate in the central system is studied by using a quantum master equation. Without a photon cavity, two resonant current peaks are observed in the range selected for the plunger-gate voltage: the ground-state peak, and the peak corresponding to the first excited state. The current in the ground state is higher than in the first excited state due to their different symmetry. In a photon cavity with the photon field polarized along or perpendicular to the transport direction, two extra side peaks are found, namely photon replicas of the ground state and of the first excited state. The side peaks are caused by photon-assisted electron transport, with multiphoton absorption processes for up to three photons during an electron tunneling process. The inter-dot tunneling in the ground state can be controlled by the photon cavity in the case of the photon field polarized along the transport direction. The electron charge is delocalized from the dots by the photon cavity. Furthermore, the current in the photon-induced side peaks can be strongly enhanced by increasing the electron-photon coupling strength for the case of photons polarized along the transport direction.

Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

2014-11-01

154

Data decomposition of Monte Carlo particle transport simulations via tally servers

An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
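The tracker/tally-server split described above can be mimicked in-process with threads and a queue standing in for MPI messaging. This is an illustration of the decomposition, not OpenMC's implementation: trackers own no tally memory and ship (cell, score) messages; a single server thread owns and updates all tallies.

```python
import queue
import random
import threading

def run_tally_server_demo(n_trackers=4, n_particles=1000, n_cells=10, seed=13):
    """Toy tally-server decomposition: tracker threads simulate particle
    'tracks' and send scores to one server thread that owns the tally array."""
    q = queue.Queue()
    tallies = [0.0] * n_cells

    def server():
        while True:
            msg = q.get()
            if msg is None:                  # shutdown sentinel
                return
            cell, score = msg
            tallies[cell] += score           # only the server touches tallies

    def tracker(tid):
        rng = random.Random(seed + tid)
        for _ in range(n_particles):
            cell = rng.randrange(n_cells)    # toy 'track': score one random cell
            q.put((cell, 1.0))               # ship the score, keep no local tally

    srv = threading.Thread(target=server)
    srv.start()
    workers = [threading.Thread(target=tracker, args=(t,)) for t in range(n_trackers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    q.put(None)                              # all scores queued; stop the server
    srv.join()
    return tallies

t = run_tally_server_demo()
```

Because the queue is FIFO, the sentinel is processed only after every score, and because only the server mutates `tallies`, no per-tracker tally memory (the classical on-node constraint) is ever needed.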

Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States)]; Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]

2013-11-01

155

Electron transport in radiotherapy using local-to-global Monte Carlo

Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.

Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A. [Lawrence Livermore National Lab., CA (United States); Ballinger, C.T. [Albany Medical Center, Albany, NY (United States). Dept. of Radiation Oncology; Neuenschwander, H. [Bern Univ. (Switzerland). Dept. of Medical Radiation Physics; Mackie, T.R.; Reckwerdt, P.J. [Univ. of Wisconsin-Madison, Madison, WI (United States)

1994-09-01

156

A Monte Carlo model for out-of-field dose calculation from high-energy photon therapy

As cancer therapy becomes more efficacious and patients survive longer, the potential for late effects increases, including effects induced by radiation dose delivered away from the treatment site. This out-of-field radiation is of particular concern with high-energy radiotherapy, as neutrons are produced in the accelerator head. We recently developed an accurate Monte Carlo model of a Varian 2100 accelerator using MCNPX for calculating the dose away from the treatment field resulting from low-energy therapy. In this study, we expanded and validated our Monte Carlo model for high-energy (18 MV) photon therapy, including both photons and neutrons. Simulated out-of-field photon doses were compared with measurements made with thermoluminescent dosimeters in an acrylic phantom up to 55 cm from the central axis. Simulated neutron fluences and energy spectra were compared with measurements using moderated gold foil activation in moderators and data from the literature. The average local difference between the calculated and measured photon dose was 17%, including doses as low as 0.01% of the central axis dose. The out-of-field photon dose varied substantially with field size and distance from the edge of the field but varied little with depth in the phantom, except at depths shallower than 3 cm, where the dose sharply increased. On average, the difference between the simulated and measured neutron fluences was 19% and good agreement was observed with the neutron spectra. The neutron dose equivalent varied little with field size or distance from the central axis but decreased with depth in the phantom. Neutrons were the dominant component of the out-of-field dose equivalent for shallow depths and large distances from the edge of the treatment field. This Monte Carlo model is useful to both physicists and clinicians when evaluating out-of-field doses and associated potential risks.

Kry, Stephen F.; Titt, Uwe; Followill, David; Poenisch, Falk; Vassiliev, Oleg N.; White, R. Allen; Stovall, Marilyn; Salehpour, Mohammad [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Medizinische Fakultaet Carl Gustav Carus, Technische Universitaet Dresden, Dresden, 01307 (Germany); Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Biostatistics and Applied Mathematics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States)

2007-09-15

157

Monte Carlo-drift-diffusion simulation of electron current transport in III-N LEDs

NASA Astrophysics Data System (ADS)

Performance of III-N based solid-state lighting is to a large extent limited by current transport effects that are also expected to contribute to the efficiency droop in real devices. To enable more accurate study of the contribution of electron transport to the droop, we develop and study a coupled Monte Carlo-drift-diffusion (MCDD) method to model the details of electron current transport in III-N optoelectronic devices. In the MCDD method, electron and hole distributions are first simulated by solving the standard drift-diffusion (DD) equations. The hole density and recombination rate density obtained from solving the DD equations are used as inputs in the Monte Carlo (MC) simulation of the electron system. The MC simulation involves solving the Boltzmann transport equation for the electron gas to accurately describe electron transport. As a hybrid of the DD and MC methods, the MCDD represents a first-order correction for electron transport in III-N LEDs as compared to DD, predicting a significant hot electron population in the simulated multi-quantum well (MQW) LED device at strong injection.

Kivisaari, Pyry; Sadi, Toufik; Oksanen, Jani; Tulkki, Jukka

2014-03-01

158

NASA Astrophysics Data System (ADS)

The variations of depth and surface dose with bone heterogeneity and beam angle were compared between unflattened and flattened photon beams using Monte Carlo simulations. Phase-space files of the 6 MV photon beams with a field size of 10×10 cm² were generated with and without the flattening filter based on a Varian TrueBeam linac. Depth and surface doses were calculated in bone and water phantoms using Monte Carlo simulations (the EGSnrc-based code). Dose calculations were repeated with the angles of the unflattened and flattened beams turned from 0° to 15°, 30°, 45°, 60°, 75° and 90° in the bone and water phantoms. Monte Carlo results of depth doses showed that, compared to the flattened beam, the unflattened photon beam had a higher dose in the build-up region but a lower dose beyond the depth of maximum dose. Dose ratios of the unflattened to flattened beams were in the range of 1.6-2.6 as the beam angle varied from 0° to 90° in water. Similar results were found in the bone phantom. In addition, surface doses about 2.5 times higher were found at beam angles of 0° and 15° in the bone and water phantoms. However, the surface dose deviation between the unflattened and flattened beams became smaller with increasing beam angle. Dose enhancements due to bone backscatter were also found at the water-bone and bone-water interfaces for both the unflattened and flattened beams in the bone phantom. With the Monte Carlo beams cross-calibrated to the monitor unit in the simulations, the variations of depth and surface dose with bone heterogeneity and beam angle were investigated and compared. For the unflattened and flattened photon beams, the surface dose and the range of depth dose ratios (unflattened to flattened beam) decreased with increasing beam angle. The dosimetric comparison in this study is useful in understanding the characteristics of the unflattened photon beam with respect to depth and surface dose in the presence of bone heterogeneity.

Chow, James C. L.; Owrangi, Amir M.

2014-08-01

159

Effective source term in the diffusion equation for photon transport in turbid media

The diffusion equation is used to describe photon transport in turbid media. We have performed a series of spectroscopy experiments on a number of uniform turbid media with different optical properties (absorption coefficient …)

Fantini, Sergio

160

Evaluation of a 50-MV Photon Therapy Beam from a Racetrack Microtron Using MCNP4B Monte Carlo Code

NASA Astrophysics Data System (ADS)

The high-energy photon therapy beam from the 50 MV racetrack microtron has been evaluated using the Monte Carlo code MCNP4B. The spatial and energy distributions of photons and the radial and depth dose distributions in the phantom are calculated for the stationary and scanned photon beams from different targets. The calculated dose distributions are compared to experimental data obtained with a silicon diode detector. Measured and calculated depth-dose distributions are in fairly good agreement, within 2-3% for positions in the range 2-30 cm in the phantom, whereas larger discrepancies of up to 10% are observed in the dose build-up region. For the stationary beams the differences between the calculated and measured radial dose distributions are about 2-10%.

Gudowska, I.; Sorcini, B.; Svensson, R.

161

Monte Carlo simulations of carrier transport in AlGaInP laser diodes

A self-consistent ensemble Monte Carlo simulation of charge transport in AlGaInP quantum-well (QW) lasers has been developed in an effort to understand the temperature sensitivity of these devices. In particular, the lasing capability of a three-well design has been studied at 300 and 360 K. Although the electron and hole leakage currents are found to increase with the temperature, this

G. C. Crow; R. A. Abram

1997-01-01

162

A bounce-averaged Monte Carlo collision operator and ripple transport in a tokamak

A bounce-averaged Monte Carlo operator is presented that simulates bounce-averaged perturbative Lorentz pitch angle scattering of particles in toroidal plasmas, in particular a tokamak. In conjunction with bounce-averaged expressions for the deterministic motion, this operator allows a quick and inexpensive simulation on time scales long compared to a bounce time. An analytically tractable model of transport due to toroidal magnetic field ripple is described.

Albert, J.M.; Boozer, A.H.

1986-09-01

163

UPSCALING OF A DUAL-PERMEABILITY MONTE CARLO SIMULATION MODEL FOR CONTAMINANT TRANSPORT (Centrale Paris and Supelec)

The transport of radionuclides in fractured media plays a fundamental … of modeling the contaminant transport in fractured media. However, within the framework of the performance …

Paris-Sud XI, Université de

164

Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(f_clin,f_msr; Q_clin,Q_msr) with high precision. Both quantities require knowledge of the field-size-dependent and detector-dependent correction factor k(f_clin,f_msr; Q_clin,Q_msr). The aim of this study is the determination of the correction factor k(f_clin,f_msr; Q_clin,Q_msr) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f_clin,f_msr; Q_clin,Q_msr); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f_clin,f_msr; Q_clin,Q_msr) is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity at field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k(f_clin,f_msr; Q_clin,Q_msr) below unity, i.e. a strong overestimation of dose.
Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small-field dosimetry, since its correction factor k(f_clin,f_msr; Q_clin,Q_msr) is closest to unity in small fields and mainly independent of the electron beam spot size. PMID:23514734
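The correction factor described above is just a double ratio of Monte Carlo dose scores. A minimal sketch, with made-up illustrative dose values rather than the paper's results:

```python
def field_factor(d_w_clin, d_w_msr):
    """Omega(f_clin,f_msr; Q_clin,Q_msr): ratio of the water dose in the
    clinical small field to that in the machine-specific reference field."""
    return d_w_clin / d_w_msr

def correction_factor(d_w_clin, d_det_clin, d_w_msr, d_det_msr):
    """k(f_clin,f_msr; Q_clin,Q_msr) = (D_w/D_det)_clin / (D_w/D_det)_msr,
    formed from four Monte Carlo dose scores: dose to water and dose
    to the detector, in the clinical and in the reference field."""
    return (d_w_clin / d_det_clin) / (d_w_msr / d_det_msr)

# Illustrative numbers only (not the paper's data): a chamber whose
# signal drops faster than the water dose in the small field gets k > 1.
k = correction_factor(0.90, 0.75, 1.00, 1.00)  # -> 1.2
```

A detector that over-responds in the small field (as the diodes do below 1 × 1 cm²) gives the opposite behavior, k below unity.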

Czarnecki, D; Zink, K

2013-04-21

165

Correlated few-photon transport in one-dimensional waveguides: Linear and nonlinear dispersions

We address correlated few-photon transport in one-dimensional waveguides coupled to a two-level system (TLS), such as an atom or a quantum dot. We derive exactly the single-photon and two-photon current (transmission) for linear and nonlinear (tight-binding sinusoidal) energy-momentum dispersion relations of photons in the waveguides and compare the results for the different dispersions. A large enhancement of the two-photon current for the sinusoidal dispersion has been seen at a certain transition energy of the TLS away from the single-photon resonances.

Roy, Dibyendu [Department of Physics, University of California, San Diego, La Jolla, California 92093-0319 (United States)

2011-04-15

166

Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers

NASA Astrophysics Data System (ADS)

A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs) that are used for timing and triggering purposes in many high-energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as input to our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in the various C2H2F4/iso-C4H10/SF6 gas mixtures used in the RPCs of the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component with no indication of any NDC for the flux component in the ALICE timing RPC system. We systematically study the origin and mechanisms of such phenomena as well as the possible physical implications of their explicit inclusion in models of RPCs. Spatially resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena.
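The distinction between bulk and flux transport coefficients under non-conservative collisions can be illustrated with a toy swarm model (all parameters are illustrative, not RPC gas data). Attachment preferentially removes slow electrons; each loss shifts the swarm's centre of mass forward, so the bulk drift velocity d⟨z⟩/dt exceeds the flux drift velocity ⟨v⟩:

```python
import random

def bulk_vs_flux(seed=1):
    """Toy swarm with two speed groups and attachment of slow electrons.
    Returns (w_flux, w_bulk); the bulk value is larger because removing
    lagging electrons advances the centre of mass."""
    rng = random.Random(seed)
    dt = 1e-3
    # (position, speed): half slow, half fast
    swarm = [[0.0, 1.0] for _ in range(500)] + [[0.0, 3.0] for _ in range(500)]
    z1 = 0.0
    flux_sum, nflux = 0.0, 0
    for step in range(2000):
        for e in swarm:
            e[0] += e[1] * dt                       # free flight
        # attachment: slow electrons are lost preferentially
        swarm = [e for e in swarm
                 if not (e[1] < 2.0 and rng.random() < 0.002)]
        if step == 999:
            z1 = sum(e[0] for e in swarm) / len(swarm)
        if step >= 1000:
            flux_sum += sum(e[1] for e in swarm) / len(swarm)
            nflux += 1
    z2 = sum(e[0] for e in swarm) / len(swarm)
    w_flux = flux_sum / nflux            # flux (instantaneous mean) velocity
    w_bulk = (z2 - z1) / (1000 * dt)     # bulk (centre-of-mass) velocity
    return w_flux, w_bulk
```

With ionization preferentially creating slow electrons instead, the inequality can reverse, which is how bulk NDC can appear without flux NDC.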

Bošnjaković, D.; Petrović, Z. Lj; White, R. D.; Dujko, S.

2014-10-01

167

A VAX version of the coupled Monte Carlo transport codes HETC and MORSE-CGA

The three-dimensional Monte Carlo transport codes, HETC and MORSE-CGA, are distributed by the Radiation Shielding Information Center at Oak Ridge National Laboratory. These codes, written for IBM-3033 computers, have been installed on the Environmental Measurements Laboratory's VAX/11-750 computer for operation in a coupled mode to study the transport of neutrons over the energy range from thermal to several GeV. This report is a guide to their use on the VAX/11-750 computer. 26 refs., 6 figs., 14 tabs.

Sanna, R.S.

1990-12-01

168

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01

169

The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known. PMID:10765455
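The sensitivity-analysis approach described above, sampling the unknown event-tree probabilities from assumed ranges and propagating each draw through the tree, can be sketched as follows. The tree structure and parameter ranges here are illustrative stand-ins, not the study's models or data:

```python
import random

def transport_risk(p_release_range, p_ignite_range, n=20000, seed=42):
    """Monte Carlo sensitivity sketch: draw the unknown event-tree
    probabilities from uniform ranges and push them through a minimal
    release -> ignition event tree, returning mean branch risks."""
    rng = random.Random(seed)
    fire_sum = toxic_sum = 0.0
    for _ in range(n):
        p_rel = rng.uniform(*p_release_range)   # accident releases hydrazine
        p_ign = rng.uniform(*p_ignite_range)    # release then ignites
        fire_sum += p_rel * p_ign               # fire/explosion branch
        toxic_sum += p_rel * (1.0 - p_ign)      # toxic-exposure branch
    return fire_sum / n, toxic_sum / n

# illustrative ranges for the two unknown probabilities
fire, toxic = transport_risk((0.01, 0.10), (0.1, 0.5))
```

Comparing how the output distributions shift as each input range is varied (e.g. with relief device vs without) is what identifies the parameters that actually drive the risk.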

Pet-Armacost, J J; Sepulveda, J; Sakude, M

1999-12-01

170

NASA Astrophysics Data System (ADS)

An electron-photon coupled Monte Carlo code ARCHER -

Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

2014-06-01

171

Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study

Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 × 5 cm² (FS5), 10 × 10 cm² (FS10), and 20 × 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening filter free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, the percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increase at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the mercury thickness increased from 0 to 20 mm. The corresponding increases for modulated 10 MV were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the mercury thickness increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm of mercury and of modulated 10 MV with 25 mm of mercury were reduced to 30% and 56% of the conventional linac output, respectively.
The energy-modulated IMRT plans delivered lower integral doses than the 6 MV IMRT or 6 MV flattening filter free plans for tumors located in the periphery while maintaining similar target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing IMRT plan quality.
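The primary-fluence attenuation that drives the output reduction follows the Beer-Lambert law. A minimal sketch; the attenuation coefficient below is an assumed illustrative value for a megavoltage spectrum in mercury, not a number taken from the paper:

```python
import math

def transmitted_primary(thickness_cm, mu_per_cm=0.6):
    """Beer-Lambert attenuation of the primary fluence by the mercury
    column: I/I0 = exp(-mu * t).  mu_per_cm = 0.6 is an assumed,
    illustrative effective linear attenuation coefficient."""
    return math.exp(-mu_per_cm * thickness_cm)

# e.g. 20 mm (2.0 cm) of mercury transmits exp(-1.2), about 30% of primaries
t20 = transmitted_primary(2.0)
```

Because the low-energy part of the spectrum is attenuated more strongly than the high-energy part, thickening the column both reduces output and hardens the beam, which is why the PDDs beyond dmax rise with mercury thickness.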

Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Department of Intelligent Convergence Systems, Seoul National University, Seoul, 151-742 (Korea, Republic of)

2012-03-15

172

High-atomic-number materials may be used as intensity modulating filters for inverse radiation treatment planning with photon beams. Such filters, when placed in a bremsstrahlung beam, attenuate the primary fluence, but also produce scattered photons that will reach the patient. To account for such effects in the optimization of photon beam intensities a semiempirical method based on narrow and broad beam transmission measurements was used to quantify the number of scattered photons produced in these filters. The method was verified by performing analytical calculations based on first scatter and a Monte Carlo simulation in 6 and 18 MV photon beams. The resultant experimental transmission ratios agree with calculations by these methods within 2 per cent under the experimental conditions investigated. The semiempirical method can thus be used as a basis for preliminary decision-making to select the proper material for intensity modulating filters and can provide a fast method to perform independent quality checks of the calculation accuracy of dose planning systems. Change in beam penetration is of less concern when treatments of target volumes at smaller depths are of interest. A 10 g cm⁻² thick filter made of low-melting-point alloy produces a change in percentage depth dose of less than 2 per cent for depths larger than 10 cm independent of field size. Similarly the scatter correction modifies the dose distribution by less than 5-10 per cent in most cases. PMID:11049169

Mejaddem, Y; Hyödynmaa, S; Brahme, A

2000-10-01

173

The Monte Carlo simulation of the electron transport through air slabs is studied with four codes: PENELOPE, GEANT3, Geant4 and EGSnrc. Monoenergetic electron beams with energies 6, 12 and 18 MeV are considered to impinge on air slabs with thicknesses ranging from 10 to 100 cm. The angular and radial distributions of the transmitted electrons are used to make a

M. Vilches; S. Garcia-Pareja; R. Guerrero; M. Anguiano; A. M. Lallena

2008-01-01

174

Monte Carlo simulations of the particle transport in semiconductor detectors of fast neutrons

NASA Astrophysics Data System (ADS)

Several Monte Carlo all-particle transport codes are under active development around the world. In this paper we focused on the capabilities of the MCNPX code (Monte Carlo N-Particle eXtended) to follow the particle transport in a semiconductor detector of fast neutrons. A semiconductor detector based on semi-insulating GaAs was the object of our investigation. As the converter material capable of producing charged particles from the (n, p) interaction, high-density polyethylene (HDPE) was employed. As the source of fast neutrons, a 239Pu-Be neutron source was used in the model. The simulations were performed using the MCNPX code, which makes it possible to track not only neutrons but also recoil protons at all energies of interest. Hence, the MCNPX code enables seamless particle transport, and no other computer program is needed to process the particle transport. The determination of the optimal thickness of the conversion layer and the minimum thickness of the active region of the semiconductor detector, as well as the simulation of energy spectra, were the principal goals of the computer modeling. Theoretical detector responses showed that the best detection efficiency is achieved for a 500 μm thick HDPE converter layer. The minimum detector active region thickness has been estimated to be about 400 μm.

Sedlačková, Katarína; Zaťko, Bohumír; Šagátová, Andrea; Nečas, Vladimír

2013-05-01

175

A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
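The Henyey-Greenstein phase function used for comparison above has a standard closed-form inverse-CDF sampling formula. This is the generic textbook recipe, not the authors' spheroidal scattering solution:

```python
import random

def sample_hg_cos_theta(g, rng):
    """Inverse-CDF sample of cos(theta) from the Henyey-Greenstein
    phase function with anisotropy factor g (standard formula)."""
    r = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * r - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * r)
    return (1.0 + g * g - s * s) / (2.0 * g)

# sanity check: the mean of cos(theta) converges to g by construction
rng = random.Random(7)
mean_cos = sum(sample_hg_cos_theta(0.9, rng) for _ in range(100000)) / 100000
```

In a full photon-migration loop, each sampled cos(theta) (plus a uniform azimuth) rotates the photon's direction before the next free path; the spheroidal model in the paper replaces this single-parameter phase function with angle-resolved intensities from the spheroid solution.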

Hart, Vern P; Doyle, Timothy E

2013-09-01

176

A bone composition model for Monte Carlo x-ray transport simulations

In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
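The paper's idea, fitting calcium content as a polynomial function of bone density, can be sketched as follows. The (density, calcium fraction) pairs below are placeholders loosely shaped like the trend in adult human bones, not the literature compositions the authors actually fitted:

```python
import numpy as np

# Placeholder (density in g/cm^3, calcium mass fraction) pairs -- NOT
# the published bone compositions used in the paper.
density = np.array([1.1, 1.3, 1.5, 1.7, 1.9])
w_ca = np.array([0.02, 0.06, 0.10, 0.15, 0.21])

# quadratic least-squares fit w_Ca(rho), in the spirit of the model
coeffs = np.polyfit(density, w_ca, 2)

def calcium_fraction(rho):
    """Calcium mass fraction predicted from bone density."""
    return float(np.polyval(coeffs, rho))
```

A voxelized phantom can then assign each bone voxel a composition from its CT-derived density instead of one fixed "bone" material, which is where the kilovoltage dose differences reported above come from.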

Zhou Hu; Keall, Paul J.; Graves, Edward E. [Department of Radiation Oncology and Department of Molecular Imaging Program at Stanford, Stanford University, Stanford, California 94305 (United States)

2009-03-15

177

Monte Carlo Simulation Model of Energetic Proton Transport through Self-generated Alfvén Waves

NASA Astrophysics Data System (ADS)

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfvén waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfvén waves—the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of isotropic scattering, which the previous Monte Carlo models were based on. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfvén waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in and the size of the foreshock region.
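The full, pitch-angle-dependent resonance condition emphasized above can be written as a small helper. The form below (non-relativistic, parallel-propagating wave, first-order cyclotron resonance) is a simplified illustration of the idea, not the paper's exact implementation:

```python
def resonant_wavenumber(v, mu, v_alfven, omega_c):
    """Resonant wavenumber from omega - k*v*mu = -omega_c with
    omega = k*v_alfven, giving k_res = omega_c / (v*mu - v_alfven).
    Keeping the explicit dependence on the pitch-angle cosine mu
    (rather than averaging it away) is the point the abstract makes;
    signs and harmonics are simplified here for illustration."""
    return omega_c / (v * mu - v_alfven)

# more field-aligned protons (larger mu) resonate with longer waves
k_half = resonant_wavenumber(1.0e6, 0.5, 5.0e4, 10.0)
k_high = resonant_wavenumber(1.0e6, 0.9, 5.0e4, 10.0)
```

Because k_res varies with mu, particles at different pitch angles feed wave growth at different wavenumbers, which is why anisotropic scattering spreads the growth spatially compared with the isotropic models.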

Afanasiev, A.; Vainio, R.

2013-08-01

178

MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves-the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of isotropic scattering, which the previous Monte Carlo models were based on. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in and the size of the foreshock region.

Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)

2013-08-15

179

Test of QEDPS: A Monte Carlo for the hard photon distributions in the e+ e- annihilation process

The validity of the photon shower generator QEDPS has been examined in detail. It is formulated from the leading-logarithmic renormalization-group equation for the electron structure function and provides a photon shower along the initial-state e+ and e-. The main interest in the present work is to test the reliability of the generator in describing processes accompanied by hard photons which are detected. For this purpose, taking HZ production as the basic reaction, the total cross section and some distributions of the hard photons are compared between two cases in which these photons are either generated by QEDPS or produced in the hard process e+e- -> HZ gamma gamma. The comparison, performed for both single and double hard photons, shows satisfactory agreement, demonstrating that the model is self-consistent.

Y. Kurihara; J. Fujimoto; T. Munehisa; Y. Shimizu

1996-03-14

180

Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

NASA Astrophysics Data System (ADS)

This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, using both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests of complex benchmarks on several thousand cores of the petaflop-scale supercomputer Tera100.

Dureau, David; Poëtte, Gaël

2014-06-01

181

The generation of photoacoustic signals for imaging objects embedded within tissues is dependent on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB™ acoustic algorithm predicts signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-μm-dia. blood vessel centered at a 1-mm depth in the skin. The simulation yields acoustic signals received by a surface detector, which are generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB™ version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided.

Jacques, Steven L.

2014-01-01

182

Quantitative image reconstruction in single photon emission CT requires an accurate attenuation map of a cross section of an object. Several data acquisition geometries have been proposed to obtain the true attenuation map by means of gamma-ray transmission CT (TCT). In the transmission data scattered photons are sometimes measured and they reduce the accuracy of reconstructed TCT images. To investigate

K. Ogawa; Y. Kawamura; A. Kubo; T. Ichihara

1996-01-01

183

Estimation of scattered photons in gamma ray transmission CT using Monte Carlo simulations

Quantitative image reconstruction in single photon emission CT requires an accurate attenuation map of a cross section of an object. Several data acquisition geometries have been proposed to obtain the true attenuation map by means of gamma-ray transmission CT (TCT). In the transmission data scattered photons are sometimes measured and they reduce the accuracy of reconstructed TCT images. To investigate

K. Ogawa; Y. Kawamura; A. Kubo; T. Ichihara

1997-01-01

184

Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC

Monte Carlo (MC) simulations of electron transport properties in the high-electric-field region of 4H- and 6H-SiC are presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon, optical phonon and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as functions of the applied electric field at an impurity concentration of 1×10^18 cm^-3 at room temperature. The simulated dependence of the drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly captures the complete electron transport properties of 4H- and 6H-SiC.
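The free-flight part of such an MC transport loop uses the standard exponential sampling of the time between collisions. A minimal sketch under the simplifying assumption of a constant total scattering rate (real simulators add a self-scattering channel when the rate depends on electron energy); the rate value is illustrative:

```python
import math
import random

def free_flight_time(total_rate, rng):
    """Sample the time to the next scattering event from
    t = -ln(1 - r) / Gamma, the standard MC recipe for a
    constant total scattering rate Gamma."""
    return -math.log(1.0 - rng.random()) / total_rate

rng = random.Random(3)
gamma = 1.0e13          # assumed total scattering rate in s^-1 (illustrative)
n = 100000
# the sample mean of the free-flight time converges to 1/Gamma
mean_t = sum(free_flight_time(gamma, rng) for _ in range(n)) / n
```

After each free flight the electron is drifted in the applied field and a scattering mechanism (polar optical, optical, or acoustic phonon) is chosen in proportion to its partial rate.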

Sun, C. C. [Centre for Diploma Program, Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka (Malaysia); You, A. H.; Wong, E. K. [Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka (Malaysia)

2010-07-07

185

Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: A plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached by a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached by an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs’ responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5%, at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1%, in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of any of the three PSDs modeled in this study in the photon energy range from 200 keV (monoenergetic) to 18 MV (linac beam).
The PSD would be even closer to water equivalent if there were a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level. PMID:21089762

Wang, Lilie L. W.; Klein, David; Beddar, A. Sam

2010-01-01

186

NASA Astrophysics Data System (ADS)

The purpose of this study was to present a Monte-Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed acceptable planning target volume (PTV) coverage similar to conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for CARMEN plans. The ipsilateral breast volume receiving 50% Dp and 100% Dp was on average 1.4-1.7 times lower for CARMEN plans. Skin and whole-body low-dose volume was also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.

Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio

2012-03-01

187

Monte Carlo studies of the exit photon spectra and dose to a metal/phosphor portal imaging screen.

The energy spectra and the dose to a Cu plate/Gd2O2S phosphor portal imaging detector were investigated for monoenergetic incident beams of photons (1.25, 2, and 5 MeV). The Monte Carlo method was used to characterize the influence of the patient/detector geometry, detector material and design, and incident beam energy on the spectral distribution and the dose, at the imaging detector plane, of a photon beam scattered from a water phantom. The results show that radiation equilibrium is lost in the air gap and that, for the geometries studied, this effect led to a reduction in the exit dose of up to 40%. The finding that the effects of the air gap and field size are roughly complementary has led to the hypothesis that an equivalent field size concept may be used to account for intensity and spectral changes arising from air gap variations. The copper plate preferentially attenuates the low-energy scattered photons incident on it, while producing additional annihilation, bremsstrahlung, and scattered photons. As a result, the scatter spectra at the copper surface entrance of the detector differ significantly from those at the Cu/phosphor interface. In addition, the mean scattered photon energy at the interface was observed to be roughly 0.4 MeV higher than the corresponding effective energy for 2 MeV incident beams. A comparison of the dose to various detector materials showed that exit dosimetry errors of up to 24% will occur if it is assumed that the Cu plate/Gd2O2S phosphor detector is water equivalent. PMID:10718136

Yeboah, C; Pistorius, S

2000-02-01

188

NASA Astrophysics Data System (ADS)

To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at few spatial locations only. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the propagation of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance as well as stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
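
The grid hierarchy idea above can be sketched with a deliberately simple stand-in problem (not the paper's reservoir simulator): the quantity of interest is a midpoint-rule integral of a random integrand, refined by doubling the cell count per level, with the telescoping MLMC sum evaluated using fewer samples on finer levels.

```python
import math
import random

def q_level(a, level):
    """Midpoint-rule approximation of Q(a) = integral_0^1 exp(a*x) dx
    on 2**level cells; finer levels are more accurate but costlier."""
    n = 2 ** level
    h = 1.0 / n
    return h * sum(math.exp(a * (i + 0.5) * h) for i in range(n))

def mlmc_estimate(rng, max_level=4, n0=4096):
    """Telescoping MLMC estimator E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
    with the sample count halved per level (a simple fixed schedule)."""
    est = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 2 ** level, 32)
        total = 0.0
        for _ in range(n):
            a = rng.random()                      # random input parameter
            fine = q_level(a, level)
            coarse = q_level(a, level - 1) if level > 0 else 0.0
            total += fine - coarse                # same sample a on both grids
        est += total / n
    return est

est = mlmc_estimate(random.Random(0))
# exact value is sum_{k>=1} 1/(k * k!) ≈ 1.3179
```

The key point mirrored from the abstract: the level-l correction Q_l - Q_{l-1} has a much smaller variance than Q_l itself, so most samples can be spent on the cheap coarse level.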

Müller, Florian; Jenny, Patrick; Meyer, Daniel

2014-05-01

189

Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H2

NASA Astrophysics Data System (ADS)

This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n0 are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise; first, for certain regions of E/n0 the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H2 under the influence of an electric field, the spatially dependent positron transport properties, such as the number of positrons, average energy and velocity, and the spatially resolved rate for Ps formation, are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.
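
The bulk/flux distinction above can be illustrated with a toy 1D swarm (not the authors' H2 cross-section model): a hypothetical loss rate proportional to speed stands in for Ps formation, removing the fastest particles first. Since fast particles are also the ones ahead of the swarm, the bulk drift (slope of the mean position) falls below the flux drift (mean velocity of the survivors).

```python
import random

def swarm(rng, n=20000, steps=200, dt=1e-3, loss=3.0):
    """Toy 1D positron swarm: fixed velocities, plus an absorbing reaction
    (a stand-in for positronium formation) with rate loss*v that removes
    fast particles preferentially. Flux velocity = survivors' mean velocity;
    bulk velocity = time derivative of the survivors' mean position."""
    vs = [1.0 + rng.random() for _ in range(n)]    # velocities in [1, 2)
    xs = [0.0] * n
    alive = list(range(n))
    mean_x = []
    for _ in range(steps):
        survivors = []
        for i in alive:
            xs[i] += vs[i] * dt
            if rng.random() >= loss * vs[i] * dt:  # survives this step
                survivors.append(i)
        alive = survivors
        mean_x.append(sum(xs[i] for i in alive) / len(alive))
    flux_v = sum(vs[i] for i in alive) / len(alive)
    bulk_v = (mean_x[-1] - mean_x[-51]) / (50 * dt)  # slope of <x> near the end
    return flux_v, bulk_v

flux_v, bulk_v = swarm(random.Random(1))
# removal of fast, leading particles drags the bulk velocity below the flux
# velocity, and both below the loss-free mean of 1.5
</antml_code_placeholder>```

This is exactly the "explicit effect" of a non-conservative process: the two transport coefficients separate even though each particle's velocity never changes.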

Bankovi?, A.; Dujko, S.; White, R. D.; Buckman, S. J.; Petrovi?, Z. Lj.

2012-05-01

190

Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries

Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method; and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or more precisely a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we numerically demonstrate significant variance reductions and speed-ups (figures of merit).
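
A minimal sketch of the importance-sampling mechanism, under simplified assumptions (a purely absorbing 1D slab, not the paper's reflecting-boundary geometry): the biased path-length density and its likelihood-ratio weight below are hypothetical choices standing in for an adjoint-derived importance map, and the weight keeps the estimator unbiased.

```python
import math
import random

SIGMA, L = 1.0, 8.0               # total cross section (1/cm) and slab depth (cm)
P_EXACT = math.exp(-SIGMA * L)    # uncollided transmission, about 3.35e-4

def analog(rng, n):
    """Analog estimator: score 1 when a sampled free path crosses the slab.
    Almost all histories score 0, so the variance per unit cost is poor."""
    return sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / SIGMA > L) / n

def adjoint_biased(rng, n, sigma_b=0.2):
    """Importance-sampled estimator: draw the path from a stretched
    exponential (rate sigma_b) and carry the likelihood-ratio weight
    of the true density over the biased one."""
    total = 0.0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / sigma_b
        if s > L:
            total += (SIGMA / sigma_b) * math.exp(-(SIGMA - sigma_b) * s)
    return total / n

est = adjoint_biased(random.Random(0), 20000)
```

Here roughly e^(-1.6) ≈ 20% of biased histories reach the scoring region versus e^(-8) for analog sampling; a perfect (zero-variance) scheme would make every history score with the same weight.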

Guillaume Bal; Ian Langmore

2011-04-13

191

Adjoint Monte Carlo methods for coupled transport are developed. The phase-space is extended by the introduction of an additional discrete coordinate (the particle type of a so-called generalized particle). The generalized particle concept allows the treatment of the transport of mixed radiation as a process with only one particle outgoing from a collision, regardless of the physical picture of the interaction. In addition to the forward equation for the generalized particle, the adjoint equation is also derived. The proposed concept is applied to the adjoint equation of the coupled gamma-ray-electron-positron transport. Charged particle transport is considered in the continuous slowing down approximation and Moliere's theory of multiple scattering, for which special adjoint sampling methods are suggested. A new approach to the simulation of fixed-energy secondary radiation is incorporated into the generalized particle concept. This approach performs fixed-energy secondary radiation simulation as the local energy estimator through the intermediate state with fixed energy. A comparison of forward and adjoint calculations for energy absorption shows the same results for radionuclide energies with and without electron equilibrium. Adjoint methods show greater efficiency in thin slabs.

Borisov, N.M. [Thomas Jefferson University (United States); Panin, M.P. [Moscow Engineering Physics Institute (State University) (Russian Federation)

2005-07-15

192

Radiation-induced "zero-resistance state" and the photon-assisted transport.

We demonstrate that the radiation-induced "zero-resistance state" observed in a two-dimensional electron gas is a result of the nontrivial structure of the density of states of the systems and the photon-assisted transport. A toy model of a quantum tunneling junction with oscillatory density of states in leads catches most of the important features of the experiments. We present a generalized Kubo-Greenwood conductivity formula for the photon-assisted transport in a general system and show essentially the same nature of the transport anomaly in a uniform system. PMID:14525265

Shi, Junren; Xie, X C

2003-08-22

193

3D electro-thermal Monte Carlo study of transport in confined silicon devices

NASA Astrophysics Data System (ADS)

The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. 
One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non-equilibrium carrier dynamics, a key determinant in device performance. The dependence of short-channel effects and Joule heating on the lateral scaling of the cross-section is specifically explored in this work. Finally, this dissertation studies the basic tradeoffs among various n-channel multigate architectures with square cross-sectional side lengths ranging from 30 nm to 5 nm.

Mohamed, Mohamed Y.

194

Purpose: To commission Monte Carlo beam models for five Varian megavoltage photon beams (4, 6, 10, 15, and 18 MV). The goal is to closely match measured dose distributions in water for a wide range of field sizes (from 2x2 to 35x35 cm{sup 2}). The second objective is to reinvestigate the sensitivity of the calculated dose distributions to variations in the primary electron beam parameters. Methods: The GEPTS Monte Carlo code is used for photon beam simulations and dose calculations. The linear accelerator geometric models are based on (i) manufacturer specifications, (ii) corrections made by Chibani and Ma [''On the discrepancies between Monte Carlo dose calculations and measurements for the 18 MV Varian photon beam,'' Med. Phys. 34, 1206-1216 (2007)], and (iii) more recent drawings. Measurements were performed using pinpoint and Farmer ionization chambers, depending on the field size. Phase space calculations for small fields were performed with and without angle-based photon splitting. In addition to the three commonly used primary electron beam parameters (E{sub AV} is the mean energy, FWHM is the energy spectrum broadening, and R is the beam radius), the angular divergence ({theta}) of primary electrons is also considered. Results: The calculated and measured dose distributions agreed to within 1% local difference at any depth beyond 1 cm for different energies and for field sizes varying from 2x2 to 35x35 cm{sup 2}. In the penumbra regions, the distance to agreement is better than 0.5 mm, except for 15 MV (0.4-1 mm). The measured and calculated output factors agreed to within 1.2%. The 6, 10, and 18 MV beam models use {theta}=0 deg., while the 4 and 15 MV beam models require {theta}=0.5 deg. and 0.6 deg., respectively. 
The parameter sensitivity study shows that varying the beam parameters around the solution can lead to 5% differences with measurements for small (e.g., 2x2 cm{sup 2}) and large (e.g., 35x35 cm{sup 2}) fields, while a perfect agreement is maintained for the 10x10 cm{sup 2} field. The influence of R on the central-axis depth dose and the strong influence of {theta} on the lateral dose profiles are demonstrated. Conclusions: Dose distributions for very small and very large fields were proved to be more sensitive to variations in E{sub AV}, R, and {theta} in comparison with the 10x10 cm{sup 2} field. Monte Carlo beam models need to be validated for a wide range of field sizes including small field sizes (e.g., 2x2 cm{sup 2}).

Chibani, Omar; Moftah, Belal; Ma, C.-M. Charlie [Department of Biomedical Physics, King Faisal Specialist Hospital and Research Center, Riyadh 11211 (Saudi Arabia) and Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States); Department of Biomedical Physics, King Faisal Specialist Hospital and Research Center, Riyadh 11211 (Saudi Arabia); Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)

2011-01-15

195

Monte Carlo based dose calculation algorithms require input data or distributions describing the phase space of the photons and secondary electrons prior to the patient-dependent part of the beam-line geometry. The accuracy of the treatment plan itself is dependent upon the accuracy of this distribution. The purpose of this work is to compare phase space distributions (PSDs) generated with the

J. V. Siebers; P. J. Keall; B. Libby; R. Mohan

1999-01-01

196

Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.

Shi, C. Y.; Xu, X. George; Stabin, Michael G. [Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas 78229 (United States); Nuclear Engineering and Engineering Physics Program, Rensselaer Polytechnic Institute, Room 1-11, NES Building, Tibbits Avenue, Troy, New York 12180 (United States); Department of Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37232-2675 (United States)

2008-07-15

197

Interface dosimetry: measurements and Monte Carlo simulations of low-energy photon beams

A comparison of measured and simulated dose perturbations at high-Z interfaces with Monte Carlo (MC) codes, EGS4, MCNP4B, and PENELOPE, having varied algorithms is presented. The measured dose perturbations strongly depend on the chamber design and are always lower than the MC data. The EGS4 data are closer to the ion chamber values. The other two codes, MCNP4B and PENELOPE,

Indra J. Das; Alireza Kassaee; Frank Verhaegen; Vadim P. Moskvin

2001-01-01

198

Interface dosimetry: measurements and Monte Carlo simulations of low-energy photon beams

A comparison of measured and simulated dose perturbations at high-Z interfaces with Monte Carlo (MC) codes, EGS4, MCNP4B, and PENELOPE, having varied algorithms is presented. The measured dose perturbations strongly depend on the chamber design and are always lower than the MC data. The EGS4 data are closer to the ion chamber values. The other two codes, MCNP4B and PENELOPE,

I. J. Das; A. Kassaee; F. Verhaegen; V. P. Moskvin

2001-01-01

199

Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport

NASA Astrophysics Data System (ADS)

Hybrid methods of neutron transport have increased greatly in use, for example, in applications using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, so for nuclear reactors the representation of curved fuel shapes (cylinders, spheres, etc.) is compromised, both in deviation of mass and in exact geometry representation. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm change to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.
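
The areal deviation that drives the pcm error above can be sketched directly (a toy stand-in, not the paper's MCNP/PENTRAN workflow): represent a circular fuel pin on a Cartesian mesh by a cell-center in/out test and compare the stair-stepped area to the exact disk area.

```python
import math

def areal_deviation(radius, n):
    """Relative area error when a disk of given radius is represented on an
    n-by-n Cartesian mesh over [-1,1]^2 by a cell-center in/out test, i.e.
    the stair-stepped pin representation discussed above."""
    h = 2.0 / n
    inside = 0
    for i in range(n):
        for j in range(n):
            x = -1.0 + (i + 0.5) * h
            y = -1.0 + (j + 0.5) * h
            if x * x + y * y <= radius * radius:
                inside += 1
    exact = math.pi * radius * radius
    return abs(inside * h * h - exact) / exact

coarse = areal_deviation(0.7, 8)      # stair-stepping error on a coarse mesh
fine = areal_deviation(0.7, 128)      # refinement shrinks the deviation
```

In a full analysis one would regress an eigenvalue difference (in pcm) against this deviation across mesh resolutions to obtain the slope parameter the abstract describes.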

Manalo, K.; Chin, M.; Sjoden, G.

2014-06-01

200

A detailed Monte Carlo accounting of radiation transport in the brain during BNCT.

The collision type central to BNCT is (10)B(n, alpha)(7)Li; however, other types of nuclear reactions also take place in the patient. In addition to the major elements (H, C, N, O), minor elements such as Na, Mg, P, S, Cl, K, Ca and Fe present in body tissues also interact in neutron collisions. Detailed accounting of the above not only provides a better understanding of radiation transport in the human body during BNCT, but such knowledge also affects the design of the facility, as well as treatment planning, imaging and verification for a given BNCT agent. Of the methods of investigation currently available, only Monte Carlo simulation could provide the detailed accounting and breakdown of the quantities required. We report a Monte Carlo simulation of an anthropomorphic voxel phantom, the VIP-Man, and show how these quantities change with different (10)B concentrations in the tumour, the blood and the remaining tissues. The (10)B biodistribution has been chosen as the variable of interest, since it is not accurately known, is frequently approximated and is a crucial quantity upon which dose calculations are based. PMID:19380231

Chin, M P W; Spyrou, N M

2009-07-01

201

NASA Astrophysics Data System (ADS)

Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes <=5.5 cm2 and densities <=0.25 g cm-3, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work was partially presented at WC 2003, Sydney.

Ulmer, W.; Pyyry, J.; Kaissl, W.

2005-04-01

202

NASA Astrophysics Data System (ADS)

Nuclear heating evaluation by Monte-Carlo simulation requires a coupled neutron-photon calculation so as to take into account the contribution of secondary photons. Nuclear data are essential for a good calculation of neutron and photon energy deposition and for secondary photon generation. However, a number of isotopes in the most common nuclear data libraries happen to be affected by energy and/or momentum conservation errors in photon production, or by inaccurate thresholds for photon emission cross sections. In this paper, we perform a comprehensive survey of the three evaluations JEFF3.1.1, JEFF3.2T2 (beta version) and ENDF/B-VII.1, over 142 isotopes. The aim of this survey is, on the one hand, to check the existence of photon production data for neutron reactions and, on the other hand, to verify the consistency of these data using the kinematic limits method recently implemented in the TRIPOLI-4 Monte-Carlo code, developed by CEA (Saclay center). Then, the impact of these inconsistencies on energy deposition scores is estimated for two materials using a specific nuclear heating calculation scheme in the context of the OSIRIS Material Testing Reactor (CEA/Saclay).

Péron, A.; Malouch, F.; Zoia, A.; Diop, C. M.

2014-06-01

203

Improved cache performance in Monte Carlo transport calculations using energy banding

NASA Astrophysics Data System (ADS)

We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
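
The reordering idea can be sketched as follows (a toy illustration of the access pattern, not the authors' implementation): lookups are first grouped by energy band, then each band is processed in one pass, so only that band's slice of the cross-section table needs to stay cache-resident at a time.

```python
import bisect
import random

def banded_lookup(energies, xs_table, band_edges):
    """Group lookups by energy band, then process one band at a time so that
    only that band's slice of the cross-section table is streamed through
    cache, instead of striding over the whole table once per particle."""
    bands = [[] for _ in range(len(band_edges) - 1)]
    for e in energies:
        bands[bisect.bisect_right(band_edges, e) - 1].append(e)
    out = []
    for band in bands:                       # one 'resident' slice per pass
        for e in band:
            idx = min(int(e * (len(xs_table) - 1)), len(xs_table) - 2)
            out.append(xs_table[idx])        # nearest-bin lookup stand-in
    return out

rng = random.Random(3)
energies = [rng.random() for _ in range(1000)]      # toy energies in [0, 1)
xs_table = [0.1 + 0.01 * i for i in range(101)]     # toy cross-section table
banded = banded_lookup(energies, xs_table, [0.0, 0.25, 0.5, 0.75, 1.0])
```

The banded pass returns the same set of lookup results as a per-particle pass; only the memory access order, and hence the cache behavior, changes.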

Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

2014-04-01

204

Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code

Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism, but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is too large to fit in the memory of a single processor and must therefore be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
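
The particle hand-off at domain boundaries can be mimicked in a toy serial model (hypothetical step and absorption parameters; not the LIFE calculation or the authors' code): each "rank" owns one slab of a 1D geometry and only the particles currently inside it, and walkers that cross a slab boundary are placed in the neighbor's receive buffer.

```python
import random

def run_decomposed(n_particles, edges, absorb=0.3, seed=7):
    """Toy domain-decomposed 1D Monte Carlo walk: each 'rank' owns the slab
    [edges[d], edges[d+1]) and only its resident particles; walkers that
    step into another slab are handed to that rank's receive buffer,
    mimicking the particle exchange of a distributed-geometry code."""
    rng = random.Random(seed)
    ndom = len(edges) - 1
    buffers = [[] for _ in range(ndom)]
    buffers[0] = [edges[0]] * n_particles          # source on the left face
    absorbed = leaked = 0
    while any(buffers):
        for d in range(ndom):
            incoming, buffers[d] = buffers[d], []
            for x in incoming:
                while True:
                    if rng.random() < absorb:      # captured in this slab
                        absorbed += 1
                        break
                    x += rng.uniform(-0.3, 0.5)    # right-biased random step
                    if x < edges[0] or x >= edges[-1]:
                        leaked += 1                # escaped the geometry
                        break
                    nd = next(i for i in range(ndom)
                              if edges[i] <= x < edges[i + 1])
                    if nd != d:                    # crossed a domain boundary
                        buffers[nd].append(x)      # hand off to the owner rank
                        break
    return absorbed, leaked

absorbed, leaked = run_decomposed(500, [0.0, 1.0, 2.0, 3.0])
# every history terminates as either absorbed or leaked: a conservation check
```

In a real MPI implementation the outer sweep becomes an iterative exchange that terminates when all ranks' receive buffers are empty.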

O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M

2008-12-07

205

NASA Astrophysics Data System (ADS)

Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

2014-06-01

206

Verification of output factors for small photon beams using Monte Carlo methods

In order to achieve acceptable radiosurgery delivery with small photon beams, it is important to know the radiation characteristics of the beams produced. One problem is that the finite volume of standard detectors (ion chambers, solid state detectors) causes uncertainties in measurements of parameters for small beams. This investigation presents a computational verification of measurements of dosimetric parameters of small

Nikita V. Bezrukiy; John J. DeMarco; Indrin Chetty; J. B. Smathers; Timothy D. Solberg

2000-01-01

207

Stochastic analyses and Monte Carlo simulations were conducted for nonergodic transport of a nonreactive solute plume in three-dimensional heterogeneous and statistically anisotropic aquifers under uniform mean flow along the x axis. The hydraulic conductivity, K(x), is modeled as a random field which is assumed to be lognormally distributed with an anisotropic exponential covariance. The simulation model is validated with good

You-Kuan Zhang; Byong-min Seo

2004-01-01

208

NASA Astrophysics Data System (ADS)

A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm², 4 × 14.5 cm² or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV.
Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement and a slight, probably just noticeable improvement of the treatment outcome due to radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors 1/2.81 (DSF) and 1/3.49 (FFF).

Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

2011-07-01

209

NASA Astrophysics Data System (ADS)

The mesoscopic transport through a toroidal carbon nanotube (TCN) system with ac fields applied to the electrodes has been investigated by employing the nonequilibrium Green's function (NGF) technique. A Landauer-Büttiker-like formula is presented for numerical calculations of the differential conductance and tunneling current. Conductance resonance takes place due to electrons resonating in the quantum levels of the TCN and in the side-bands caused by the external ac fields. Photon-assisted transport can be observed in the oscillation of both the conductance and the current with respect to the magnetic flux. The side peaks and current suppressions are the main effect of photon absorption and emission during transport. The stair-like current-voltage characteristics result from the quantum nature of the TCN and the applied microwave fields. A photon-electron pumping effect can be obtained by applying the microwave fields to the leads.

Zhao, Hong-Kang; Wang, Jian

2004-05-01

210

Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons

NASA Astrophysics Data System (ADS)

We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
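The core free-flight/scatter loop of such a phonon Monte Carlo solver can be illustrated with a deliberately reduced sketch. The example below is not the full-dispersion method of the paper: it assumes a single gray (frequency-independent) mean free path and one-dimensional isotropic rescattering, and all names and parameter values are illustrative. It does, however, reproduce the ballistic-to-diffusive crossover the abstract describes: transmission is near unity when the ribbon is much shorter than the mean free path and falls off as the length grows.

```python
import math
import random

def phonon_mc_path(length_um, mfp_um, rng=random.Random(0), n_phonons=2000):
    """Fraction of phonons transmitted across a ribbon of given length.

    A minimal gray-medium sketch of the free-flight/scatter loop used in
    phonon Monte Carlo: each phonon alternates ballistic flights (sampled
    from an exponential with the given mean free path) with isotropic
    scattering events, until it exits at either end of the ribbon.
    """
    transmitted = 0
    for _ in range(n_phonons):
        x, direction = 0.0, 1.0                        # launched from the hot contact
        while True:
            flight = -mfp_um * math.log(1.0 - rng.random())  # exponential free flight
            x += direction * flight
            if x >= length_um:
                transmitted += 1                       # reached the cold contact
                break
            if x <= 0.0:
                break                                  # returned to the hot contact
            direction = rng.choice((-1.0, 1.0))        # isotropic rescatter (1D)
    return transmitted / n_phonons

# Ballistic-to-diffusive crossover: transmission is near 1 for L << mfp
# and falls off once the length exceeds the mean free path.
print(phonon_mc_path(0.1, 1.0))
print(phonon_mc_path(100.0, 1.0))
```

In a full-dispersion code the single mean free path would be replaced by frequency- and branch-dependent scattering rates sampled per phonon, with transmitted flux weighted by group velocity and occupation.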

Mei, S.; Maurer, L. N.; Aksamija, Z.; Knezevic, I.

2014-10-01

211

NASA Astrophysics Data System (ADS)

Interface roughness strongly influences the performance of germanium metal-oxide-semiconductor field-effect transistors (MOSFETs). In this paper, a 2D full-band Monte Carlo simulator is used to study the impact of interface roughness scattering on electron and hole transport properties in the inversion layers of long- and short-channel Ge MOSFETs. The carrier effective mobility in the channel of Ge MOSFETs and the non-equilibrium transport properties are investigated. Results show that both electron and hole mobility are strongly influenced by interface roughness scattering. The output curves for 50 nm channel-length double-gate n- and p-type Ge MOSFETs show that their drive currents are significantly improved compared with those of Si n- and p-MOSFETs when the interface between channel and gate dielectric is smooth. Drive current enhancements of 82% and 96% are obtained for the n- and p-MOSFETs with a completely smooth interface. However, the enhancement decreases sharply as the interface roughness increases. With a very rough interface, the drive currents of Ge MOSFETs are even less than those of Si MOSFETs. Moreover, significant velocity overshoot has also been found in Ge MOSFETs.

Du, Gang; Liu, Xiao-Yan; Xia, Zhi-Liang; Yang, Jing-Feng; Han, Ru-Qi

2010-05-01

212

One-dimensional hopping transport in disordered organic solids. II. Monte Carlo simulations

NASA Astrophysics Data System (ADS)

Drift mobility of charge carriers in strongly anisotropic disordered organic media is studied by Monte Carlo computer simulations. Results for nearest-neighbor hopping are in excellent agreement with those of the analytic theory (Cordes et al., preceding paper). It is widely believed that the low-field drift mobility in disordered organic solids has the form μ ~ exp[-(T0/T)²], with the characteristic temperature T0 depending solely on the scale of the energy distribution of localized states responsible for transport. Taking into account electron transitions to sites more distant than the nearest neighbors, we show that this dependence is not universal and that the parameter T0 also depends on the concentration of localized states and on the decay length of the electron wave function in localized states. The computer simulations show that correlations in the distribution of localized states substantially influence not only the field dependence, as known from the literature, but also the temperature dependence of the drift mobility. In particular, strong space-energy correlations diminish the role of long-range hopping transitions in charge carrier transport.
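The hopping simulations summarized above can be caricatured with a one-dimensional kinetic Monte Carlo walk. The sketch below is only a toy version of the method: it uses Miller-Abrahams rates on a 1D chain with Gaussian site-energy disorder and a modest applied field, whereas the paper treats anisotropic media and long-range hops; every function name and parameter value here is an illustrative assumption.

```python
import math
import random

def ma_rate(e_from, e_to, dist, kT, nu0=1.0, a=1.0):
    """Miller-Abrahams hopping rate between two localized states."""
    de = e_to - e_from
    boltz = math.exp(-de / kT) if de > 0 else 1.0      # uphill hops are suppressed
    return nu0 * math.exp(-2.0 * dist / a) * boltz

def drift_mobility_sketch(kT, sigma=1.0, n_sites=200, field=0.05,
                          n_hops=20000, rng=random.Random(1)):
    """Drift mobility of one carrier hopping on a periodic 1D chain of sites
    with Gaussian energetic disorder (unit spacing, unit localization length),
    estimated from a kinetic Monte Carlo walk to the two nearest neighbors."""
    energies = [rng.gauss(0.0, sigma) for _ in range(n_sites)]
    pos, t, displacement = n_sites // 2, 0.0, 0
    for _ in range(n_hops):
        left, right = (pos - 1) % n_sites, (pos + 1) % n_sites
        # site energies tilted by the field: hopping right is downhill
        r_left = ma_rate(energies[pos], energies[left] + field, 1.0, kT)
        r_right = ma_rate(energies[pos], energies[right] - field, 1.0, kT)
        total = r_left + r_right
        t += -math.log(1.0 - rng.random()) / total     # KMC residence time
        if rng.random() < r_right / total:
            pos, displacement = right, displacement + 1
        else:
            pos, displacement = left, displacement - 1
    return displacement / (t * field)                  # v_drift / E

# Mobility falls steeply as kT drops below the disorder scale sigma:
print(drift_mobility_sketch(kT=1.0), drift_mobility_sketch(kT=0.3))
```

Runs at two temperatures show the steep loss of mobility as kT drops below the disorder scale, which is the qualitative content of the μ ~ exp[-(T0/T)²] law discussed above.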

Kohary, K.; Cordes, H.; Baranovskii, S. D.; Thomas, P.; Yamasaki, S.; Hensel, F.; Wendorff, J.-H.

2001-03-01

213

Atomistic Monte Carlo simulations of heat transport in Si and SiGe nanostructured materials

NASA Astrophysics Data System (ADS)

Efficient thermoelectric energy conversion depends on the design of materials with low thermal conductivity and/or high electrical conductivity and Seebeck coefficient [1]. Semiconducting nanostructured materials are promising candidates to exhibit high thermoelectric efficiency, as they may have much lower thermal conductivity than their bulk counterparts [1]. Atomistic simulations capable of handling large samples and describing accurately phonon dispersions and lifetimes at the nanoscale could greatly advance our understanding of heat transport in such materials [2]. We will present an atomistic Monte Carlo method to solve the Boltzmann transport equation [3] that enables the computation of the thermal conductivity of large systems with both empirical and first principles Hamiltonians (e.g. up to several thousand atoms in the case of Tersoff potentials). We will demonstrate how this new approach allows one to rationalize trends in the thermal conductivity of a range of Si and SiGe based nanostructures, as a function of size, dimensionality and morphology [3]. [1] See e.g. A. J. Minnich et al. Energy Environ. Sci. 2, 466 (2009). [2] Y. He, I. Savic, D. Donadio, and G. Galli, accepted in Phys. Chem. Chem. Phys. [3] I. Savic, D. Donadio, F. Gygi, and G. Galli, submitted.

Savic, Ivana; Donadio, Davide; Murray, Eamonn; Gygi, Francois; Galli, Giulia

2013-03-01

214

Monte Carlo estimation of neoclassical transport for the TJ-II stellarator

NASA Astrophysics Data System (ADS)

The neoclassical transport properties of the TJ-II stellarator [C. Alejaldre et al., Fusion Technol. 13, 521 (1988)] are studied with the monoenergetic Monte Carlo technique. Because of the rich magnetic-field structure of TJ-II, a compromise between computational cost and reliability of the monoenergetic diffusion coefficient estimates is found at about one thousand particles and one hundred harmonics, although even these requirements are probably too demanding for routine transport estimations. The database containing the normalized monoenergetic diffusion coefficients for several radial positions, radial electric fields and collisionalities has been fitted using a neural network. This fit reduces the number of points necessary in the database and allows a smooth interpolation and extrapolation when performing the convolutions of the monoenergetic coefficients with the Maxwellian. For two typical TJ-II discharges, the ambipolar radial electric field and the neoclassical particle and heat fluxes are presented; both discharges show rather large positive radial electric fields at the plasma core and small negative fields at the edge. The neoclassical particle and energy confinement times are in surprisingly good agreement with the experimental energy balance analysis and the international stellarator scaling. Although no satisfactory explanation is yet available, the large neoclassical diffusion caused by the complex ripple structure of the TJ-II magnetic field may be an important ingredient.

Tribaldos, V.

2001-04-01

215

FA-119 On-Farm Transport of Ornamental Fish. Tina C. Crosby, Jeffrey E. Hill, Carlos V. Martinez. 2004. ... and transport of fish will affect survival and overall quality of the fish (see UF IFAS Circular 919, Stress

Watson, Craig A.

216

In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. In this approach, the timing resolution bound is calculated directly from Monte Carlo-generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed-form solutions of the arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 mm × 3 mm × 20 mm LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single-photoelectron pulse rise time. PMID:25255807
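The idea of computing timing statistics directly from Monte Carlo-generated photon arrival times can be sketched as follows. This is not the paper's bound calculation: as a stand-in, the sketch measures the event-to-event spread of a low-rank order statistic of the arrival times, using generic LYSO-like rise/decay constants and a Gaussian photon-transport spread; all parameter values and names are assumptions for illustration.

```python
import math
import random
import statistics

def event_arrival_times(n_photons, tau_rise=0.07, tau_decay=40.0,
                        transit_sigma=0.1, rng=random):
    """Scintillation photon arrival times (ns) for one event: a bi-exponential
    emission profile (sampled as the sum of two exponentials, which has pdf
    proportional to exp(-t/tau_decay) - exp(-t/tau_rise)) plus a Gaussian
    photon-transport transit spread inside the crystal."""
    times = []
    for _ in range(n_photons):
        t = rng.expovariate(1.0 / tau_decay) + rng.expovariate(1.0 / tau_rise)
        times.append(t + rng.gauss(0.0, transit_sigma))
    return sorted(times)

def timing_resolution(n_events=300, n_photons=3000, rank=5,
                      rng=random.Random(2)):
    """Standard deviation of the rank-th detected photon's timestamp over many
    events: a simple stand-in for evaluating a timing estimator on Monte Carlo
    arrival times (the paper derives a lower bound instead)."""
    stamps = [event_arrival_times(n_photons, rng=rng)[rank - 1]
              for _ in range(n_events)]
    return statistics.pstdev(stamps)

print(timing_resolution())
```

As expected, collecting more scintillation photons per event sharpens the early-photon timestamps and improves the achievable resolution.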

Vinke, Ruud; Olcott, Peter D; Cates, Joshua W; Levin, Craig S

2014-10-21

217

NASA Astrophysics Data System (ADS)

In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. In this approach, the timing resolution bound is calculated directly from Monte Carlo-generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed-form solutions of the arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 mm × 3 mm × 20 mm LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single-photoelectron pulse rise time.

Vinke, Ruud; Olcott, Peter D.; Cates, Joshua W.; Levin, Craig S.

2014-10-01

218

Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important, because as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.
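The local-realization idea behind these algorithms can be sketched with chord-length sampling in a planar rod model. The code below is a heavily simplified stand-in, not Algorithm A or B themselves: it treats a pure absorber, samples a fresh exponential (Markovian) material chord along each flight, and picks the entering material at random rather than by volume fraction; all names and parameter values are illustrative.

```python
import random

def transmission_cls(slab_cm, sigma, chord, n_particles=20000,
                     rng=random.Random(3)):
    """Transmission probability through a planar binary Markovian mixture by
    chord-length sampling: along each history a new material segment is drawn
    from the exponential chord-length distribution, and a candidate collision
    is sampled against that segment's cross section (pure-absorber sketch).

    sigma, chord: per-material total cross sections (1/cm) and mean chords (cm).
    """
    transmitted = 0
    for _ in range(n_particles):
        x, mat, alive = 0.0, rng.choice((0, 1)), True
        # (a volume-fraction-weighted entrance material would be more faithful)
        while alive and x < slab_cm:
            segment = rng.expovariate(1.0 / chord[mat])   # Markovian chord length
            flight = rng.expovariate(sigma[mat])          # distance to collision
            if flight < segment and x + flight < slab_cm:
                alive = False                 # absorbed inside this segment
            else:
                x += segment                  # no collision: cross the interface
                mat = 1 - mat                 # (memorylessness justifies resampling)
        if alive:
            transmitted += 1
    return transmitted / n_particles

# A thicker slab of the same mixture transmits less.
p_thin = transmission_cls(1.0, sigma=(1.0, 0.1), chord=(0.5, 1.0))
p_thick = transmission_cls(5.0, sigma=(1.0, 0.1), chord=(0.5, 1.0))
print(p_thin, p_thick)
```

Estimates from such a sampler are what the benchmark suites compare against ensemble averages over explicit Markovian material realizations.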

Brantley, P S

2009-06-30

219

There are numerous scenarios in which radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will release radioactive particulates that can in turn be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses delivered to bystanders; however, current state-of-the-art codes are unable to accurately calculate radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulations characterize the resuspension of particulates in a complex urban geometry due to air flow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the computational fluid dynamics simulations, and Monte Carlo N-Particle eXtended (MCNPX) is used to perform the Monte Carlo radiation transport simulations. PMID:25162421

Ali, Fawaz; Waller, Ed

2014-10-01

220

EGS4. Electron-Gamma Shower Monte Carlo Code

EGS4 (Electron-Gamma Shower) is a general purpose Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry for particles with energies above a few keV up to several TeV. The radiation transport of electrons or photons can be simulated in any element, compound, or mixture. The following physics processes can be taken into account: bremsstrahlung

1989-01-01

221

NASA Astrophysics Data System (ADS)

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories residing on a single processor into batches for tally purposes; in doing so it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency.
The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
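The tally-batching idea, grouping only the histories on one processor into batches so that no network reduction is needed until the end of the run, can be mimicked serially. The sketch below is an illustrative single-process stand-in for the thesis algorithm, not OpenMC code; the function name and the use of batch means for the standard error are assumptions of this example.

```python
import random
import statistics

def batched_tally(scores_per_rank, n_batches=10):
    """Batch statistics for a Monte Carlo tally, the idea behind deferring
    tally reduction: each rank groups only its own histories into batches and
    keeps one running sum per batch, so the batch means need to be combined
    in a single reduction only at the end of the simulation.

    scores_per_rank: list (one entry per simulated rank) of per-history scores.
    """
    batch_sums, batch_counts = [0.0] * n_batches, [0] * n_batches
    for rank_scores in scores_per_rank:        # no communication inside this loop
        for i, s in enumerate(rank_scores):
            b = i % n_batches                  # batches span this rank's histories only
            batch_sums[b] += s
            batch_counts[b] += 1
    batch_means = [s / c for s, c in zip(batch_sums, batch_counts)]
    mean = statistics.fmean(batch_means)
    sem = statistics.stdev(batch_means) / len(batch_means) ** 0.5
    return mean, sem

# Four simulated "ranks", each with 1000 history scores around a true mean of 5.
rng = random.Random(4)
ranks = [[rng.gauss(5.0, 1.0) for _ in range(1000)] for _ in range(4)]
mean, sem = batched_tally(ranks)
print(mean, sem)   # mean lands near 5.0, with a small standard error
```

The design point is that per-history scores never cross rank boundaries; only one small vector of batch sums does, once.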

Romano, Paul Kollath

222

MCNP: Photon benchmark problems

The recent widespread and markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these requirements for code validation, the general-purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically, and the results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems.

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-09-01

223

Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0 × 4.0 to 30.0 × 30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm⁻³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm⁻³) were found to be within ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset.
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)

2011-04-15

224

Suppression of population transport and control of exciton distributions by entangled photons

Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subject to this uncertainty. PMID:23653194

Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul

2013-01-01

225

Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle

NASA Astrophysics Data System (ADS)

A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects, including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities, that can either accelerate silver diffusion or slow it by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower-length-scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
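The residence-time kinetic Monte Carlo loop that such a model is built on can be shown in miniature. The sketch below is a toy 1D stand-in, not the TRISO model itself: a single silver atom random-walks across a chain of sites toward release at the far end, with every tenth site acting as a trap with a reduced escape rate; all rates and sizes are illustrative assumptions.

```python
import math
import random

def kmc_release_time(n_sites=50, hop_rate=1.0, trap_rate=0.01,
                     trap_every=10, rng=random.Random(5)):
    """Residence-time kinetic Monte Carlo for one atom random-walking across a
    1D chain toward release at site n_sites; every trap_every-th site is a
    trap with a much smaller escape rate, standing in for the grain-boundary
    traps and cavities of the mesoscale containment-layer model."""
    pos, t = 0, 0.0
    while pos < n_sites:
        rate = trap_rate if (pos % trap_every == 0 and pos > 0) else hop_rate
        # two candidate events (hop left, hop right) share the site's escape rate;
        # at the inner boundary (pos == 0) only a hop outward is possible
        total = 2.0 * rate if pos > 0 else rate
        t += -math.log(1.0 - rng.random()) / total     # exponential waiting time
        if pos == 0 or rng.random() < 0.5:
            pos += 1                                   # hop toward the outer surface
        else:
            pos -= 1
    return t

# Mean release time over an ensemble of atoms:
times = [kmc_release_time() for _ in range(200)]
print(sum(times) / len(times))
```

Deepening the traps (lowering `trap_rate`) lengthens the release time, which is the qualitative role trap binding energies play in the mesoscale model.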

de Bellefon, G. M.; Wirth, B. D.

2011-06-01

226

Monte Carlo model of neutral-particle transport in diverted plasmas

The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
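The pseudo-collision method mentioned above, together with suppression of absorption and Russian roulette, can be sketched in one dimension. The example below is an illustrative delta-tracking (Woodcock-style) sampler for a pure absorber with a spatially varying cross section, not the actual divertor code: flights are drawn against a constant majorant, rejected candidates are pseudo-collisions, absorption is replaced by weight reduction, and roulette terminates low-weight histories; all names and numbers are assumptions.

```python
import random

def track_survival(sigma_of_x, sigma_majorant, slab_cm, absorb_frac,
                   n_particles=20000, rng=random.Random(6)):
    """Pseudo-collision (delta) tracking with survival biasing: flights are
    sampled against a constant majorant cross section, and each candidate
    collision is accepted with probability sigma(x)/sigma_majorant, so the
    spatially varying background never has to be gridded. Absorption is
    suppressed by weight reduction, and Russian roulette terminates
    low-weight histories. Returns the mean transmitted weight."""
    transmitted = 0.0
    for _ in range(n_particles):
        x, w = 0.0, 1.0
        while x < slab_cm and w > 0.0:
            x += rng.expovariate(sigma_majorant)   # flight to candidate collision
            if x >= slab_cm:
                transmitted += w                   # escaped the slab
                break
            if rng.random() < sigma_of_x(x) / sigma_majorant:
                w *= 1.0 - absorb_frac             # real collision: implicit capture
                if w < 0.1:                        # Russian roulette on low weights
                    if rng.random() < 0.5:
                        w *= 2.0                   # survivor's weight is doubled
                    else:
                        w = 0.0                    # history terminated, unbiased
            # rejected candidates are pseudo-collisions: nothing changes
    return transmitted / n_particles

# Linearly ramping cross section across a 2 cm slab; majorant = its peak value.
p = track_survival(lambda x: 0.5 + 0.5 * x, 1.5, 2.0, absorb_frac=0.3)
print(p)
```

For this linear ramp the expected transmitted weight is exp(-0.3 ∫σ dx) = exp(-0.6) ≈ 0.55, and the variance-reduced estimator converges to it while keeping every history alive longer than analog absorption would.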

Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

1981-11-01

227

Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10×10 down to 0.5×0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5×0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs’ responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10×10 to 0.5×0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work can have a variation of only 1%–2% irrespective of field size and orientation of the detector if the length of the sensitive region is not more than 2 mm and the optical fiber stems are prevented from pointing directly at the incident source. PMID:21520871

Wang, Lilie L. W.; Beddar, Sam

2011-01-01

228

Direct photon emission in Heavy Ion Collisions from Microscopic Transport Theory and Fluid Dynamics

Direct photon emission in heavy-ion collisions is calculated within a relativistic micro+macro hybrid model and compared to the microscopic transport model UrQMD. In the hybrid approach, the high-density part of the collision is described by an ideal 3+1-dimensional hydrodynamic calculation, while the early (pre-equilibrium) and late (rescattering) phases are calculated with the transport model. Different scenarios for the transition from the macroscopic description to the transport description and their effects are studied. The calculations are compared to measurements by the WA98 collaboration, and predictions for the future CBM experiment are made.

Bjoern Baeuchle; Marcus Bleicher

2010-03-29

229

NASA Astrophysics Data System (ADS)

This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle³ treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the dose to bone compared with the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased.
Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than that of the 105 kVp beam, when the bone thickness was 1 cm. In this study, dose deviations of bone and mucosal layers of 18% and 17% were found between our results from Monte Carlo simulation and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and thicknesses of the bone and uniform water using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provided an essential insight into the radiation staff to justify and estimate bone and mucosal dose.

Chow, James C. L.; Jiang, Runqing

2012-06-01

230

This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. 
Dose in the second bone layer, interfacing with air, was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% in the bone and mucosal layers were found between our Monte Carlo results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and thicknesses of the bone and uniform water using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight for radiation staff to justify and estimate bone and mucosal dose. PMID:22642985

Chow, James C L; Jiang, Runqing

2012-06-21

231

Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam

Because of the adverse effects of ionizing radiation on fetuses, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac, including out-of-beam components, was used for dose calculations at the out-of-field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were done using EBT2 radiochromic film in an RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field's edge. An increase in AT does not change the PD depth of maximum or its distribution as a function of distance from the field's edge. It is concluded that estimating the maximum fetal dose using a flat phantom, i.e., without taking into account the AT, is possible. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the out-of-field depth of maximum dose (Dmax) should be used for in-plane profile measurement. PMID:24083135

Atarod, Maryam; Shokrani, Parvaneh

2013-01-01

232

Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam.

Because of the adverse effects of ionizing radiation on fetuses, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac, including out-of-beam components, was used for dose calculations at the out-of-field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were done using EBT2 radiochromic film in an RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field's edge. An increase in AT does not change the PD depth of maximum or its distribution as a function of distance from the field's edge. It is concluded that estimating the maximum fetal dose using a flat phantom, i.e., without taking into account the AT, is possible. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the out-of-field depth of maximum dose (Dmax) should be used for in-plane profile measurement. PMID:24083135

Atarod, Maryam; Shokrani, Parvaneh

2013-01-01

233

The physics of electron transport in Si and GaAs is investigated with use of a Monte Carlo technique which improves the "state-of-the-art" treatment of high-energy carrier dynamics. (1) The semiconductor is modeled beyond the effective-mass approximation by using the band structure obtained from empirical-pseudopotential calculations. (2) The electron-phonon, electron-impurity, and electron-electron scattering rates are computed in a way consistent with

Massimo V. Fischetti; Steven E. Laux

1988-01-01

234

A detailed Monte Carlo N-Particle Transport Code (MCNP5) model of the University of Missouri research reactor (MURR) has been developed. The ability of the model to accurately predict isotope production rates was verified by comparing measured and calculated neutron-capture reaction rates for numerous isotopes. In addition to thermal (1/v) monitors, the benchmarking included a number of isotopes whose (n, γ)

N. J. Peters; J. D. Brockman; J. D. Robertson

2009-01-01

235

We show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new “Boltzmann Monte Carlo” (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of
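The exponential transform referenced above stretches sampled flight paths toward deep penetration and compensates with particle weights; the weight moments tracked by the BMC equation govern the variance of exactly such estimators. A minimal single-flight sketch of the path-length part of the transform (angular biasing omitted; cross sections are illustrative, not from the paper):

```python
import math
import random

def transmission_estimate(sigma, sigma_star, thickness, n, seed=1):
    """Importance-sampled estimate of P(flight > thickness) = exp(-sigma*T).

    Flight lengths are drawn from a 'stretched' exponential with a biased
    cross section sigma_star < sigma (the exponential transform); each
    sample carries the weight true-pdf / biased-pdf, keeping the estimator
    unbiased while sending far more histories through the slab."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(sigma_star)            # biased flight length
        w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * s)
        if s > thickness:                          # particle penetrates slab
            total += w
    return total / n

est = transmission_estimate(sigma=1.0, sigma_star=0.3, thickness=5.0, n=100_000)
print(est, math.exp(-5.0))   # biased-sampling estimate vs analytic answer
```

With the analog density, only about 0.7% of histories would reach the scoring region; the transform makes roughly a fifth of them contribute, each with a small, bounded weight.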

Taro Ueki; Edward W Larsen

1998-01-01

236

Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL from which it originated. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for those open fields tested was improved on average from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method, and a similar level of accuracy was achieved. PMID:25295381
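Stripped of regularization, the commissioning step above reduces to fitting nonnegative PSL weights so that the weighted sum of precomputed per-PSL doses matches a measurement. A sketch using plain projected gradient descent in place of the paper's augmented-Lagrangian solver (all dose numbers are invented):

```python
def commission_psl_weights(psl_doses, measured, iters=2000, step=0.2):
    """Toy phase-space-let (PSL) commissioning: find nonnegative weights w
    such that sum_j w[j]*psl_doses[j] best matches the measured dose.

    Projected gradient descent on 0.5*||D w - d||^2 stands in for the
    augmented-Lagrangian solver used in the paper (an illustrative
    simplification without the symmetry/smoothness regularizers)."""
    k = len(psl_doses)            # number of PSLs
    m = len(measured)             # number of dose points
    w = [1.0 / k] * k
    for _ in range(iters):
        # residual r = D w - d, where column j of D is PSL j's dose vector
        r = [sum(psl_doses[j][i] * w[j] for j in range(k)) - measured[i]
             for i in range(m)]
        # gradient of 0.5*||r||^2 is D^T r; take a step, project onto w >= 0
        for j in range(k):
            g = sum(psl_doses[j][i] * r[i] for i in range(m))
            w[j] = max(0.0, w[j] - step * g)
    return w

# Three synthetic PSL dose vectors sampled at four dose points (made up).
D = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
true_w = [0.5, 1.0, 0.2]
measured = [sum(D[j][i] * true_w[j] for j in range(3)) for i in range(4)]
w = commission_psl_weights(D, measured)
print([round(x, 3) for x in w])   # recovers weights close to [0.5, 1.0, 0.2]
```

Because each PSL dose is precomputed once, re-running this fit against new measurements is cheap; that is what makes the approach practical for commissioning many machines from one reference phase space.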

Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B

2014-10-01

237

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL from which it originated. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for those open fields tested was improved on average from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method, and a similar level of accuracy was achieved.

Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

2014-10-01

238

Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method

NASA Astrophysics Data System (ADS)

We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aiming at calculations of neutrino transport in type II supernovae. Close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post-bounce core models under the assumption of a static background. We have also investigated in detail the dependence of the results on the numbers of radial, angular, and energy grid points and the way to discretize the spatial advection term which is used in the Boltzmann solver. A general relativistic calculation has been done for one of the models. We find good overall agreement between the two methods. This gives credibility to both methods which are based on completely different formulations. In particular, the number and energy fluxes and the mean energies of the neutrinos show remarkably good agreement, because these quantities are determined in a region where the angular distribution of the neutrinos is nearly isotropic and they are essentially frozen in later on. On the other hand, because of a relatively small number of angular grid points (which is inevitable due to limitations of the computation time) the Boltzmann solver tends to slightly underestimate the flux factor and the Eddington factor outside the (mean) "neutrinosphere" where the angular distribution of the neutrinos becomes highly anisotropic. As a result, the neutrino number (and energy) density is somewhat overestimated in this region. This fact suggests that the Boltzmann solver should be applied to calculations of the neutrino heating in the hot-bubble region with some caution because there might be a tendency to overestimate the energy deposition rate in disadvantageous situations. A comparison shows that this trend is opposite to the results obtained with a multi-group flux-limited diffusion approximation of neutrino transport. 
Employing three different flux limiters, we find that all of them lead to a significant underestimation of the neutrino energy density in the semitransparent regime, and thus must yield too low values for the net neutrino heating (heating minus cooling) in the hot-bubble region. The accuracy of the Boltzmann solver can be improved by using a variable angular mesh to increase the angular resolution in the region where the neutrino distribution becomes anisotropic.
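The flux factor and Eddington factor compared above are ratios of angular moments of the radiation field, F/(cE) and P/E. A sketch of how they are formed from a discretized angular distribution (a uniform midpoint grid in mu, not the authors' S_N quadrature):

```python
import math

def angular_moments(intensity, n_mu=400):
    """Flux factor F/(cE) and Eddington factor P/E for a planar field.

    The zeroth, first and second angular moments of intensity(mu) are
    integrated on a uniform midpoint grid in mu = cos(angle) -- a toy
    version of the moments an S_N transport solver tracks per energy group."""
    dmu = 2.0 / n_mu
    mus = [-1.0 + (i + 0.5) * dmu for i in range(n_mu)]
    e = sum(intensity(mu) * dmu for mu in mus)            # ~ energy density
    f = sum(mu * intensity(mu) * dmu for mu in mus)       # ~ flux
    p = sum(mu * mu * intensity(mu) * dmu for mu in mus)  # ~ pressure
    return f / e, p / e    # (flux factor, Eddington factor)

iso = angular_moments(lambda mu: 1.0)
print(iso)       # isotropic field: flux factor ~0, Eddington factor ~1/3

peaked = angular_moments(lambda mu: math.exp(8.0 * mu))
print(peaked)    # forward-peaked field: both factors climb toward 1
```

Resolving the transition from 1/3 toward 1 outside the neutrinosphere is precisely what demands the fine (or variable) angular mesh the authors recommend.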

Yamada, Shoichi; Janka, Hans-Thomas; Suzuki, Hideyuki

1999-04-01

239

A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented.

Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

2013-07-01

240

Quantum transport of strongly interacting photons in a one-dimensional nonlinear waveguide

We present a theoretical technique for solving the quantum transport problem of a few photons through a one-dimensional, strongly nonlinear waveguide. We specifically consider the situation where the evolution of the optical field is governed by the quantum nonlinear Schrödinger equation (NLSE). Although this kind of nonlinearity is quite general, we focus on a realistic implementation involving cold atoms loaded in a hollow-core optical fiber, where the atomic system provides a tunable nonlinearity that can be large even at a single-photon level. In particular, we show that when the interaction between photons is effectively repulsive, the transmission of multi-photon components of the field is suppressed. This leads to anti-bunching of the transmitted light and indicates that the system acts as a single-photon switch. On the other hand, in the case of attractive interaction, the system can exhibit either anti-bunching or bunching, which is in stark contrast to semiclassical calculations. We show that the bunching behavior is related to the resonant excitation of bound states of photons inside the system.

Mohammad Hafezi; Darrick Chang; Vladimir Gritsev; Eugene Demler; Mikhail Lukin

2009-11-25

241

Normal and anomalous transport across an interface: Monte Carlo and analytical approach

We present a Monte Carlo scheme to simulate particles going across an interface separating two layers of a medium characterized by different physical properties, together with an analytical formulation of the same problem, for both normal diffusive and subdiffusive regimes. We relate the Monte Carlo simulation parameters to the coefficients and boundary conditions appearing in the companion analytical equations. Under

M. Marseguerra; A. Zoia

2006-01-01

242

Update on the Status of the FLUKA Monte Carlo Transport Code

NASA Technical Reports Server (NTRS)

The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow better consistency to be achieved between the nucleus-nucleus section and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, structural changes to the program's input and scoring capabilities are continually being made. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import into ROOT of the FLUKA output files for analysis and to deploy a user-friendly GUI input interface.

Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.

2004-01-01

243

Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. 
Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface. PMID:24694727
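The early-photoelectron time stamp discussed above can be illustrated with a toy scintillation model: each detected photon's arrival is an exponential decay time plus Gaussian transport jitter, and the trigger is taken at the k-th detected photon. The decay constant and jitter below are assumptions for illustration, not the paper's fitted parameters:

```python
import math
import random

def timing_spread(k, n_photons=1000, n_events=400, tau=40.0, jitter=0.2, seed=3):
    """Std (over events) of a time stamp taken at the k-th detected photon.

    Toy scintillation timing model: per-photon arrival = exponential decay
    time (tau in ns, LSO-like order of magnitude) + Gaussian transit jitter.
    Illustrates why triggering on early photoelectrons preserves timing
    resolution: later order statistics fluctuate far more event-to-event."""
    rng = random.Random(seed)
    stamps = []
    for _ in range(n_events):
        arrivals = sorted(rng.expovariate(1.0 / tau) + rng.gauss(0.0, jitter)
                          for _ in range(n_photons))
        stamps.append(arrivals[k - 1])
    mean = sum(stamps) / n_events
    return math.sqrt(sum((t - mean) ** 2 for t in stamps) / n_events)

early, late = timing_spread(k=5), timing_spread(k=200)
print(early, late)   # the early time stamp fluctuates much less
```

In a phosphor-coated crystal the slow phosphor light mostly populates the late order statistics, which is why an early-photoelectron trigger can keep DOI encoding without paying the full timing penalty.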

Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R

2014-04-21

244

NASA Astrophysics Data System (ADS)

Conventional formulations of changes in cosmogenic nuclide production rates with snow cover are based on a mass-shielding approach, which neglects the role of neutron moderation by hydrogen. This approach can produce erroneous correction factors and add to the uncertainty of the calculated cosmogenic exposure ages. We use a Monte Carlo particle transport model to simulate fluxes of secondary cosmic-ray neutrons near the surface of the Earth and vary surface snow depth to show changes in neutron fluxes above rock or soil surface. To correspond with shielding factors for spallation and low-energy neutron capture, neutron fluxes are partitioned into high-energy, epithermal and thermal components. The results suggest that high-energy neutrons are attenuated by snow cover at a significantly higher rate (shorter attenuation length) than indicated by the commonly-used mass-shielding formulation. As thermal and epithermal neutrons derive from the moderation of high-energy neutrons, the presence of a strong moderator such as hydrogen in snow increases the thermal neutron flux both within the snow layer and above it. This means that low-energy production rates are affected by snow cover in a manner inconsistent with the mass-shielding approach and those formulations cannot be used to compute snow correction factors for nuclides produced by thermal neutrons. Additionally, as above-ground low-energy neutron fluxes vary with snow cover as a result of reduced diffusion from the ground, low-energy neutron fluxes are affected by snow even if the snow is at some distance from the site where measurements are made.
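For contrast, the conventional mass-shielding correction that the study critiques is a single exponential in mass depth, f = exp(-rho*z/Lambda). A sketch with a commonly quoted spallation attenuation length (all parameter values are illustrative assumptions):

```python
import math

def snow_shielding_factor(depth_cm, density=0.3, attenuation_length=160.0):
    """Conventional mass-shielding correction for cosmogenic production
    under snow: f = exp(-rho * z / Lambda).

    density in g/cm^3 (fresh-ish snow assumed), Lambda in g/cm^2 (a value
    often quoted for spallation reactions). The study above finds that the
    effective Lambda for high-energy neutrons under snow is shorter than
    this standard value, so the formula understates the shielding -- and it
    does not apply at all to thermal-neutron-produced nuclides."""
    return math.exp(-density * depth_cm / attenuation_length)

# 50 cm of 0.3 g/cm^3 snow: standard vs a shorter attenuation length
f_standard = snow_shielding_factor(50)
f_short = snow_shielding_factor(50, attenuation_length=110.0)
print(f_standard, f_short)   # the shorter length gives stronger shielding
```

The Monte Carlo result for low-energy neutrons cannot be captured by any choice of Lambda here, since hydrogen moderation can raise the thermal flux above the snow rather than attenuate it.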

Zweck, Christopher; Zreda, Marek; Desilets, Darin

2013-10-01

245

The input/output characteristics of coherent photon transport through a semiconductor cavity system containing a single quantum dot is presented. The nonlinear quantum optics formalism uses a master equation approach and focuses on a waveguide-cavity system containing a semiconductor quantum dot; our general technique also applies to studying coherent reflection from a micropillar cavity. We investigate the effects of light propagation and show the need for quantized multiphoton effects for various dot-cavity systems, including weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down---even in the weak coupling regime. As a measure of the photon correlations, we compute the Fano factor and the error associated with making a semiclassical approximation. We also investigate the role of electron--acoustic-phonon scattering and show that phonon-mediated scatt...

Hughes, S

2011-01-01

246

Monte Carlo-based inverse treatment planning.

A Monte Carlo based inverse treatment planning system (MCI) has been developed which combines arguably the most accurate dose calculation method (Monte Carlo particle transport) with a 'guaranteed' optimization method (simulated annealing). A distribution of photons is specified in the tumour volume; they are transported using an adjoint calculation method to outside the patient surface to build up an intensity distribution. This intensity distribution is used as the initial input into an optimization algorithm. The dose distribution from each beam element from a number of fields is pre-calculated using Monte Carlo transport. Simulated annealing optimization is then used to find the weighting of each beam element, to yield the optimal dose distribution for the given criteria and constraints. MCI plans have been generated in various theoretical phantoms and patient geometries. These plans show conformation of the dose to the target volume and avoidance of critical structures. To verify the code, an experiment was performed on an anthropomorphic phantom. PMID:10473202
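The optimization step described above, weighting precomputed per-beam-element doses via simulated annealing, can be illustrated with a toy fit to a prescription. The dose matrix, cooling schedule, and proposal width below are invented for illustration and are not the MCI system's actual configuration:

```python
import math
import random

def anneal_beam_weights(element_doses, prescription, iters=20000, seed=7):
    """Toy simulated-annealing weighting of precomputed beam-element doses.

    Each entry of element_doses is one element's dose per unit weight at
    the dose points; annealing searches for nonnegative weights whose
    summed dose best matches the prescription (quadratic cost)."""
    rng = random.Random(seed)

    def cost(w):
        return 0.5 * sum(
            (sum(dj[i] * wj for dj, wj in zip(element_doses, w)) - p) ** 2
            for i, p in enumerate(prescription))

    w = [0.5] * len(element_doses)
    c = cost(w)
    temp = 1.0
    for _ in range(iters):
        j = rng.randrange(len(w))
        trial = list(w)
        trial[j] = max(0.0, trial[j] + rng.gauss(0.0, 0.05))
        ct = cost(trial)
        # Metropolis rule: always accept improvements, sometimes uphill moves
        if ct < c or rng.random() < math.exp((c - ct) / temp):
            w, c = trial, ct
        temp *= 0.9995          # geometric cooling schedule
    return w, c

D = [[1.0, 0.2, 0.0], [0.0, 0.3, 1.0]]   # 2 elements, 3 dose points (made up)
w, c = anneal_beam_weights(D, prescription=[1.0, 0.8, 2.0])
print(w, c)   # weights near [1, 2]; residual cost near zero
```

The uphill-acceptance term is what backs the abstract's "guaranteed" framing: unlike pure descent, annealing can escape local minima of a dose-volume cost before the temperature drops.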

Jeraj, R; Keall, P

1999-08-01

247

Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas

Essential for modeling heating and transport of fusion plasma is determining the distribution function of the plasma species. Characteristic for RF-heating is creation of particle distributions with a high energy tail. In the high energy region the deviation from a Maxwellian distribution is large while in the low energy region the distribution is close to a Maxwellian due to the velocity dependency of the collision frequency. Because of geometry and orbit topology Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.
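The benefit of a δf method, simulating only the deviation from the Maxwellian while treating the thermal bulk analytically, can be sketched in one dimension with a Gaussian standing in for the Maxwellian. The bump shapes and their locations are illustrative, not a model of any RF-heated distribution:

```python
import math
import random

def maxwellian_tail(x, mu=0.0, sigma=1.0):
    """Analytic P(v > x) for the Gaussian 'Maxwellian' background."""
    return 0.5 * (1.0 - math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def delta_f_estimate(cut=2.0, eps=0.01, n=20000, seed=5):
    """Toy delta-f estimator of a tail probability under
    f = f_Maxwell + eps * (bump_plus - bump_minus).

    Only the small deviation delta-f is sampled, with signed marker
    weights +-eps/n; the Maxwellian contribution is added analytically,
    so no simulation effort is spent resolving the thermal bulk."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):                 # markers for the positive bump
        if rng.gauss(3.0, 0.5) > cut:
            acc += eps / n
    for _ in range(n):                 # markers for the negative bump
        if rng.gauss(-3.0, 0.5) > cut:
            acc -= eps / n
    return maxwellian_tail(cut) + acc

est = delta_f_estimate()
exact = maxwellian_tail(2.0) + 0.01 * (maxwellian_tail(2.0, 3.0, 0.5)
                                       - maxwellian_tail(2.0, -3.0, 0.5))
print(est, exact)
```

Because the markers carry weights of order eps rather than order one, the statistical noise scales with the deviation, not with the full distribution; the adaptive scheme in the paper additionally reshapes the marker density and sources to cut this variance further.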

Höök, J.; Hellsten, T. [Fusion Plasma Physics, School of Electrical Engineering, Royal Institute of Technology (KTH), SE-100 44, Stockholm, Association VR-Euratom (Sweden)]

2009-11-26

248

NASA Astrophysics Data System (ADS)

Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. 
Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle differential basis with common cross-section libraries.

Bahadori, Amir Alexander

249

ITS Version 4.0: Electron/photon Monte Carlo transport codes

The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities to the three-dimensional combinatorial-geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.

Halbleib, J.A.; Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States)]; Seltzer, S.M. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)]

1995-07-01

250

Monte Carlo Simulation of Photon Transport Coupled to CAD Object Description

In transmission radiography, information about an object is obtained by irradiating the object and recording the transmitted radiation. The recorded radiation consists of a primary and a scattered component. Conventional models account only for the primary component, which carries the information about the object structure. The scattered radiation is considered a homogeneous background and described as a build-up. But for

M. Zhukovsky; S. Podoliako; G.-R. Tillack; C. Bellon

2004-01-01

251

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation runs.

O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09

252

NASA Astrophysics Data System (ADS)

A thermal-to-fusion neutron convertor has been studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate for such multi-step reaction problems. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate this coupled problem, from neutron absorption through charged particle ionization to secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to markedly improve the calculation speed by ensuring that deuteron/triton-induced secondary products play a major role. Nuclear data are taken from ENDF or TENDL, and stopping powers from SRIM, which better describes low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source, which has been studied in depth and includes deuteron transport and secondary neutron generation, is employed. Various parameters, including the fusion neutron angular distribution, the average neutron energy at different emission directions, and differential and integral energy distributions, are calculated with our tool, with a traditional deterministic method as reference. Finally, we present RSMC results for the convertor, including the conversion ratio of 1 mm 6LiD under typical thermal neutron (Maxwell spectrum) incidence, and the fusion neutron spectrum, which will be used in our experiment.
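The "forced particle production" idea can be illustrated with a toy sketch (the function names and the zero-variance framing are illustrative assumptions, not RSMC's actual implementation): instead of producing a rare secondary with small probability p, every history produces it with its statistical weight multiplied by p, so the tally mean is preserved while the variance on that channel vanishes.

```python
import random

def analog_tally(p, weight, n, rng):
    """Analog sampling: each history produces the secondary with probability p."""
    return sum(weight for _ in range(n) if rng.random() < p) / n

def forced_tally(p, weight):
    """Forced production: every history produces the secondary, carrying the
    production probability in its statistical weight; the mean is unchanged."""
    return weight * p

# a rare (p = 1e-3) secondary-production channel
rng = random.Random(42)
p, w = 1e-3, 1.0
forced = forced_tally(p, w)               # exactly p*w, zero variance
analog = analog_tally(p, w, 200000, rng)  # noisy estimate of the same mean
```

Both estimators target the same mean, but the forced estimator needs no rare-event statistics, which is why it speeds up the secondary-neutron calculation so markedly.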

Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie

2012-09-01

253

We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections, based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations, which is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature. PMID:23406093
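The mass-balance correction described above can be sketched as a minimally invasive least-squares adjustment of the noisy kinetic Monte Carlo rates (the constraint matrix A and the rates below are illustrative; the paper's construction via the stoichiometry matrix is more general):

```python
import numpy as np

def mass_balance_correction(A, r):
    """Minimally adjust raw kMC rate estimates r so that the conservation
    constraint A @ (r + delta) = 0 holds exactly.

    Solves  min ||delta||^2  subject to  A (r + delta) = 0,
    whose closed form is  delta = -A^T (A A^T)^+ (A r).
    """
    residual = A @ r                      # how far the raw rates violate conservation
    delta = -A.T @ np.linalg.pinv(A @ A.T) @ residual
    return r + delta

# toy example: two fluxes that must balance at steady state (r1 - r2 = 0)
A = np.array([[1.0, -1.0]])
r = np.array([1.05, 0.95])               # noisy kMC estimates
r_corr = mass_balance_correction(A, r)
```

The corrected rates satisfy the constraint exactly while staying as close as possible to the stochastic estimates, preventing the unphysical accumulation of mass in the reactor.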

Schaefer, C; Jansen, A P J

2013-02-01

254

We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections, based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations, which is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.

Schaefer, C.; Jansen, A. P. J. [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

2013-02-07

255

NASA Astrophysics Data System (ADS)

To calibrate and validate tank experiments of macrodispersion in density-dependent flow within a stochastically heterogeneous medium, performed in a 10 m long, 1.2 m high and 0.1 m wide Plexiglas tank at the University of Kassel over the last few years, numerous Monte Carlo simulations using the SUTRA density-dependent flow and transport model have been performed. The objective of this ongoing long-term study is the analysis of the effects of the stochastic properties of the porous medium on the steady-state macrodispersion, particularly the transversal dispersion. The tank experiments have been set up to mimic density-dependent flow under hydrodynamically stable conditions (horizontally stratified flow, whereby saltwater is injected horizontally into freshwater in the lower half of the tank). Numerous experiments with saltwater concentrations ranging from c_0 = 250 ppm (fresh water) to c_0 = 100000 ppm and three inflow velocities of u = 1, 4 and 8 m/day each are carried out for three stochastic, anisotropically packed sand structures with different mean K_g, variance σ², and horizontal and vertical correlation lengths λ_x, λ_z for the permeability variations. For each flow and transport experiment carried out in one tank pack, a large number of Monte Carlo simulations with stochastic realizations taken from the corresponding statistical family (with predefined K_g, σ², λ_x, λ_z) are simulated under steady-state conditions. From moment analyses and lateral widths of the simulated saltwater plume, variances σ_D² of lateral dispersion are calculated as a function of horizontal distance x from the tank inlet. Using simple square-root regression analysis of σ_D²(x), an expectation value for the transversal dispersivity E(A_T) is then computed which should be representative for the particular medium family and the given flow conditions. One issue of particular interest concerns the number N of Monte Carlo simulations required to get an asymptotically stable value E(σ_D²) or E(A_T). 
Although this number depends essentially on the variance σ² of the heterogeneous medium, increasing with the latter, we find that N = O(100), i.e. an order of magnitude less than what has been found in previously published Monte Carlo simulations of tracer-type macrodispersion in stochastically heterogeneous media. As for the physics of the macrodispersion process retrieved from both the experiments and the Monte Carlo simulations, we find reasonable agreement that, as expected, deteriorates somewhat as the density contrast and the variance of the permeability distribution of the porous medium increase. Another aspect that will be discussed in detail is the different degree of sensitivity of the lateral macrodispersion to the various parameters describing the flow and the porous medium.
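The dispersivity estimate from the plume second moments can be sketched as follows, assuming the Fickian growth law σ_D²(x) = 2·A_T·x (the study fits the square root σ_D(x); the linear-in-variance fit below is the equivalent slope estimate, and the data are synthetic):

```python
import numpy as np

def transverse_dispersivity(x, var_d):
    """Estimate A_T from the growth of the lateral plume variance, assuming
    the Fickian relation var_D(x) = 2 * A_T * x; the least-squares slope
    through the origin is A_T = sum(x * var_d) / (2 * sum(x^2))."""
    x = np.asarray(x, dtype=float)
    var_d = np.asarray(var_d, dtype=float)
    return float(x @ var_d / (2.0 * x @ x))

# synthetic plume second moments for a medium with A_T = 5e-4 m
x = np.linspace(0.5, 9.5, 10)        # distance from the tank inlet [m]
var_d = 2.0 * 5e-4 * x               # noiseless lateral variances
a_t = transverse_dispersivity(x, var_d)
```

In the Monte Carlo study, this estimate would be averaged over the realizations of each statistical family to obtain E(A_T), and N is increased until that average stabilizes.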

Starke, B.; Koch, M.

2005-12-01

256

Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...

Tseung, H Wan Chan; Beltran, C

2014-01-01

257

NASA Astrophysics Data System (ADS)

Mesoscopic transport through an ultrasmall quantum dot (QD) coupled to two single-wall carbon nanotube (SWCN) leads under microwave fields (MWFs) is investigated by employing the nonequilibrium Green’s function (NGF) technique. The charging energy and junction capacitances influence the output characteristics sensitively. The MWFs applied on the leads and gate induce novel photon-assisted tunnelling, strongly associated with the density of states (DOS) of the SWCN leads. The SWCN leads act as quantum wires, and the compound effect induces nonlinear current behavior and resonant tunnelling in a larger region of energy scale. Negative differential conductance (NDC) is clearly observed as the source-drain junction capacitances C_L and C_R are large enough. The multi-resonant NDC oscillation appears due to the charging and photon-electron pumping effects associated with the contribution of multi-channel quantum wires.

Zhao, Li-Na; Zhao, Hong-Kang

2004-11-01

258

Use of single scatter electron monte carlo transport for medical radiation sciences

The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

Svatos, Michelle M. (Oakland, CA)

2001-01-01

259

computer (PC). Because an FPGA is capable of executing Monte Carlo simulations with a high degree of parallelism, a simulation run on a large FPGA can be executed at a much higher rate than an equivalent simulation on a modern single-processor desktop PC...

Pasciak, Alexander Samuel

2007-04-25

260

NASA Astrophysics Data System (ADS)

We present a perturbative approach to derive the semiclassical equations of motion for the two-dimensional electron dynamics under the simultaneous presence of static electric and magnetic fields, where the quantized Hall conductance is known to be directly related to the topological properties of translationally invariant magnetic Bloch bands. In close analogy to this approach, we develop a perturbative theory of two-dimensional photonic transport in gyrotropic photonic crystals to mimic the physics of quantum Hall systems. We show that a suitable permittivity grading of a gyrotropic photonic crystal is able to simulate the simultaneous presence of analog electric and magnetic field forces for photons, and we rigorously derive the topology-related term in the equation for the electromagnetic energy velocity that is formally equivalent to the electronic case. A possible experimental configuration is proposed to observe a bulk photonic analog to the quantum Hall physics in graded gyromagnetic photonic crystals.

Esposito, Luca; Gerace, Dario

2013-07-01

261

The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. PMID:23877204

Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

2013-08-21

262

; published 19 December 2005 In the algorithm of Leksell GAMMAPLAN the treatment planning software of Leksell capsule, and the collimator system. In the algorithm of Leksell GAMMAPLAN, these scattered photons

Yu, K.N.

263

NASA Astrophysics Data System (ADS)

After developing various kinds of photodetectors such as phototubes, photomultiplier tubes, image pick up tubes, solid state photodetectors and a variety of light sources, we also started to develop integrated systems utilizing new detectors or imaging devices. These led us to the technology for a single photon counting imaging and detection of picosecond and femtosecond phenomena. Through those experiences, we gained the understanding that photon is a paste of substances, and yet we know so little about photon. By developing various technology for many fields such as analytical chemistry, high energy physics, medicine, biology, brain science, astronomy, etc., we are beginning to understand that the mind and life are based on the same matter, that is substance. Since humankind has so little knowledge about the substance concerning the mind and life, this makes some confusion on these subjects at this moment. If we explore photonics more deeply, many problems we now have in the world could be solved. By creating new knowledge and technology, I believe we will be able to solve the problems of illness, aging, energy, environment, human capability, and finally, the essential healthiness of the six billion human beings in the world.

Hiruma, Teruo

1993-04-01

264

NSDL National Science Digital Library

In this activity using an open space and a thick rope, students simulate the movement of photons from the Sun. The resource is part of the teacher's guide accompanying the video, NASA Why Files: The Case of the Mysterious Red Light. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.

265

Monte Carlo simulation studies of spin transport in graphene armchair nanoribbons

NASA Astrophysics Data System (ADS)

The research in the area of spintronics is gaining momentum due to the promise spintronics based devices have shown. Since the spin degree of freedom of an electron is used to store and process information, spintronics can provide numerous advantages over conventional electronics by providing new functionalities. In this article, we study spin relaxation in graphene nanoribbons (GNR) of armchair type by employing a semiclassical Monte Carlo approach. D'yakonov-Perel' relaxation due to structural inversion asymmetry (Rashba spin-orbit coupling) and Elliott-Yafet (EY) relaxation cause spin dephasing in armchair graphene nanoribbons. We investigate spin relaxation in α-, β- and γ-armchair GNR with varying width and temperature.

Salimath, Akshay Kumar; Ghosh, Bahniman

2014-10-01

266

NASA Technical Reports Server (NTRS)

Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, are obtained between BRYNTRN and Monte Carlo results.

Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

1990-01-01

267

NASA Astrophysics Data System (ADS)

In this work, phonon transport in two-dimensional (2D) porous silicon structures with aligned pores is investigated by Monte Carlo simulations considering the frequency-dependent phonon mean free paths (MFPs). A boundary condition based on the periodic heat flux with constant virtual wall temperature is developed for the studied periodic structures. Such periodic boundary conditions enable the simulation of the lattice thermal conductivities with a minimum computational domain. For the 2D case, it is found that phonon size effects caused by the periodically arranged pores can be remarkable even when the pore size and spacing are much larger than the averaged phonon MFPs. Our results show the importance of considering the frequency dependence of phonon MFPs in the analysis of micro- and nanostructured materials.

Hao, Qing; Chen, Gang; Jeng, Ming-Shan

2009-12-01

268

Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
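The spatial domain decomposition can be sketched in one dimension (the slab decomposition, dataclass, and function names are illustrative assumptions, not Mercury's actual API): each rank owns a slab of cells, and a particle that tracks to a slab boundary is handed to the neighboring rank.

```python
# Minimal 1-D sketch of spatial domain decomposition for particle tracking.
from dataclasses import dataclass

@dataclass
class Particle:
    x: float        # position
    u: float        # direction of flight (+1.0 or -1.0)

def owner(x, slab_width):
    """Rank that owns position x under a uniform 1-D slab decomposition."""
    return int(x // slab_width)

def track_to_boundary(p, slab_width):
    """Advance the particle to the edge of its current slab and return the
    rank it must be communicated to."""
    rank = owner(p.x, slab_width)
    edge = (rank + 1) * slab_width if p.u > 0 else rank * slab_width
    p.x = edge + 1e-12 * p.u     # nudge across the interprocessor boundary
    return owner(p.x, slab_width)

p = Particle(x=0.7, u=+1.0)
neighbor = track_to_boundary(p, slab_width=1.0)   # rank that receives p
```

In an actual MPI implementation the particle's state would be serialized and sent to `neighbor`, which resumes tracking it; only the ownership bookkeeping is shown here.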

Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09

269

Branching and path-deviation of positive streamers resulting from statistical photon transport

NASA Astrophysics Data System (ADS)

The branching and change in direction of propagation (path-deviation) of positive streamers in molecular gases such as air likely require a statistical process which perturbs the head of the streamer and produces an asymmetry in its space charge density. In this paper, the mechanisms for path-deviation and branching of atmospheric pressure positive streamer discharges in dry air are numerically investigated from the viewpoint of statistical photon transport and photoionization. A statistical photon transport model, based on randomly selected emitting angles and mean-free-path for absorption, was developed and embedded into a fluid-based plasma transport model. The hybrid model was applied to simulations of positive streamer coaxial discharges in dry air at atmospheric pressure. The results show that secondary streamers, often spatially isolated, are triggered by the random photoionization and interact with the thin space charge layer (SCL) of the primary streamer. This interaction may be partly responsible for path-deviation and streamer branching. The general process consists of random remote photo-electron production which initiates a back-traveling electron avalanche, collision of this secondary avalanche with the primary streamer and the subsequent perturbation to its SCL. When the SCL is deformed from a symmetric to an asymmetric shape, the streamer can experience an abrupt change in the direction of propagation. If the SCL is sufficiently perturbed and essentially broken, local maxima in the SCL can develop into new streamers, leading to streamer branching. During the propagation of positive streamers, this mechanism can take place repetitively in time and space, thus producing multi-level branching and more than two branches within one level.
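The statistical photon transport model described above rests on two samplings: a randomly selected emission angle and an absorption distance drawn from the mean-free-path distribution. A minimal sketch of these two draws (function and parameter names are illustrative):

```python
import math
import random

def sample_photon_step(mean_free_path, rng=random.random):
    """Sample one statistical photon step: an isotropic emission direction
    plus an absorption distance drawn from the exponential free-path
    distribution p(s) = exp(-s / mfp) / mfp."""
    # isotropic direction in 3-D: cos(theta) uniform on [-1, 1], uniform azimuth
    mu = 2.0 * rng() - 1.0
    phi = 2.0 * math.pi * rng()
    sin_t = math.sqrt(1.0 - mu * mu)
    direction = (sin_t * math.cos(phi), sin_t * math.sin(phi), mu)
    # inverse-CDF sampling of the exponential free path
    s = -mean_free_path * math.log(1.0 - rng())
    return direction, s

direction, step_length = sample_photon_step(2.0)
```

Each sampled photon is then transported along `direction` for `step_length`, and a photo-electron is deposited at the absorption point, seeding the back-traveling avalanches described above.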

Xiong, Zhongmin; Kushner, Mark J.

2014-12-01

270

Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks

Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. Calculations indicate a clear δB² dependence of NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.

Kimin Kim, Jong-Kyu Park, Gerrit Kramer and Allen H. Boozer

2012-04-11

271

NASA Astrophysics Data System (ADS)

Photon counting detectors based on semiconductor materials are a promising imaging modality and provide many benefits for x-ray imaging compared with conventional detectors. Such a detector is able to measure the x-ray photon energy deposited by each event and provide the x-ray spectrum formed by the detected photons. Recently, photon counting detectors have been developed for x-ray imaging. However, little work has been done on developing novel x-ray imaging techniques and evaluating image quality in x-ray systems based on photon counting detectors. In this study, we simulated computed tomography (CT) images using projection-based and image-based energy weighting techniques and evaluated the effect of energy weighting on CT images. We designed an x-ray CT system equipped with a cadmium telluride (CdTe) detector operating in photon counting mode using the Geant4 Application for Tomographic Emission (GATE) simulation. A microfocus x-ray source was modeled to reduce the flux of photons and minimize spectral distortion. The phantom had a cylindrical shape of 30 mm diameter and consisted of polymethylmethacrylate (PMMA) with inserts of blood (1.06 g/cm3), iodine, and gadolinium (50 mg/cm3). The reconstructed images of the phantom were acquired with projection-based and image-based energy weighting techniques. To evaluate image quality, the contrast-to-noise ratio (CNR) was calculated as a function of the number of energy bins. The CNR of images acquired with both energy weighting techniques was improved compared with that of integrating and counting images, and increased with the number of energy bins. As the number of energy bins increased, the CNR of the image-based energy weighting image exceeded that of the projection-based energy weighting image. 
The results of this study show that energy weighting techniques based on photon counting detectors can improve image quality, and that the number of energy bins used for generating the image is important.
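The core of energy weighting can be sketched as a weighted sum over energy bins; the E**-3 weighting factor is a common choice in the photon-counting literature, and the bin counts, energies, and function name below are illustrative assumptions rather than the paper's data:

```python
import numpy as np

def energy_weighted_sum(bin_counts, bin_energies, k=3.0):
    """Energy-weighted detector signal (sketch): low-energy photons carry
    more contrast, so each energy bin is weighted by E**-k before summing;
    k = 0 recovers plain photon counting."""
    w = np.asarray(bin_energies, dtype=float) ** -k
    w /= w.sum()                          # normalize the weights
    return float(np.asarray(bin_counts, dtype=float) @ w)

counts = [120.0, 300.0, 250.0]            # counts in three energy bins
energies = [30.0, 50.0, 70.0]             # bin centers [keV]
weighted = energy_weighted_sum(counts, energies)
plain = energy_weighted_sum(counts, energies, k=0.0)
```

In the projection-based variant this weighting is applied to each projection before reconstruction; in the image-based variant, images reconstructed per energy bin are combined with the same kind of weights.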

Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

2012-03-01

272

NASA Astrophysics Data System (ADS)

Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermoreflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.

Ding, D.; Chen, X.; Minnich, A. J.

2014-04-01

273

Theory of single-photon transport in a single-mode waveguide. I. Coupling to a cavity containing a two-level atom. Jung-Tsung Shen and Shanhui Fan, Ginzton Laboratory, Stanford University. Single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed

Fan, Shanhui

274

Theory of single-photon transport in a single-mode waveguide. II. Coupling to a whispering-gallery resonator containing a two-level atom. Jung-Tsung Shen and Shanhui Fan, Ginzton Laboratory, Stanford University. ... interacting with a two-level atom. The single-photon transport properties such as the transmission

Fan, Shanhui

275

Characterization of photonic bandgap fiber for high-power narrow-linewidth optical transport

NASA Astrophysics Data System (ADS)

An investigation of the use of hollow-core photonic bandgap (PBG) fiber to transport high-power narrow-linewidth light is performed. In conventional fiber the main limitation in this case is stimulated Brillouin scattering (SBS) but in PBG fiber the overlap between the optical intensity and the silica that hosts the acoustic phonons is reduced. In this paper we show this should increase the SBS threshold to the multi-kW level even when including the non-linear interaction with the air in the core. A full model and experimental measurement of the SBS spectra is presented, including back-scatter into other optical modes besides the fundamental, and some of the issues of coupling high power into hollow-core fibers are discussed.

Bennett, Charlotte R.; Jones, David C.; Smith, Mark A.; Scott, Andrew M.; Lyngsoe, Jens K.; Jakobsen, Christian

2014-03-01

276

As a widely used numerical solution of the radiation transport equation (RTE), the discrete ordinates method can predict the propagation of photons through biological tissues more accurately than the diffusion equation. The discrete ordinates method reduces the RTE to a series of differential equations that can be solved by source iteration (SI). However, the tremendous time consumption of SI, which is partly caused by the expensive computation of each SI step, limits its applications. In this paper, we present a graphics processing unit (GPU) parallel accelerated SI method for discrete ordinates. Utilizing the calculation independence at the levels of the discrete ordinate equation and the spatial element, the proposed method reduces the time cost of each SI step by parallel calculation. The photon reflection at the boundary is calculated based on the results of the last SI step to ensure calculation independence at the level of the discrete ordinate equation. An element sweeping strategy is proposed to detect calculation independence at the level of the spatial element. A GPU parallel framework, the compute unified device architecture (CUDA), was employed to carry out the parallel computation. The simulation experiments, which were carried out with a cylindrical phantom and a numerical mouse, indicated that the time cost of each SI step can be reduced by up to a factor of 228 by the proposed method with a GTX 260 graphics card. PMID:21772362
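The serial structure that the paper parallelizes can be sketched as a generic source-iteration loop (the `sweep`/`scatter` callables and the toy infinite-medium problem below are illustrative assumptions, not the discrete-ordinates solver itself): each step builds the scattering source from the previous scalar flux, then performs one transport sweep.

```python
import numpy as np

def source_iteration(sweep, scatter, q, tol=1e-8, max_iter=500):
    """Generic source-iteration loop: phi_{n+1} = sweep(scatter(phi_n) + q),
    repeated until the scalar flux stops changing. In the GPU method, the
    work inside each step is parallelized per ordinate and spatial element."""
    phi = np.zeros_like(q)
    for it in range(max_iter):
        phi_new = sweep(scatter(phi) + q)
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, max_iter

# toy infinite-medium check: sweep divides by sigma_t, scattering ratio c = 0.5
sigma_t, c = 1.0, 0.5
phi, iters = source_iteration(lambda s: s / sigma_t, lambda p: c * p,
                              q=np.array([1.0]))
# analytic fixed point: phi = q / (sigma_t * (1 - c)) = 2
```

The iteration converges geometrically with ratio c, which is why highly scattering media need many sweeps and why accelerating each sweep on the GPU pays off.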

Peng, Kuan; Gao, Xinbo; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; He, Xiaowei; Wang, Xiaorei; Liang, Jimin; Tian, Jie

2011-07-20

277

NASA Astrophysics Data System (ADS)

Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89 to 4.49 and 3.01 to 3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.

Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

2014-06-01

278

has continued to build on the effort of earlier years in two areas: (1) writing and validating computer codes to assist scientists and engineers in simulating space radiation in a wide range of environments, and (2) collaborating with scientists on specific problems where expertise with radiation transport calculations is an important component of the scientific analysis. In all cases discussed in this report, the Monte Carlo simulation package used is the radiation transport code FLUKA [1, 2], which is one of the most complete and sophisticated radiation transport codes available today. The data analysis package is

unknown authors

279

NASA Astrophysics Data System (ADS)

Space- and ground-level electronic equipment with semiconductor devices is always subject to the deleterious effects of radiation. The study of ion-solid interactions can reveal the radiation effects of scattering and stopping of high-speed atomic particles passing through matter. This study has been of theoretical interest and of practical importance in recent years, driven by the need to control material properties at the nanoscale. This paper presents calculations of the final 3D distribution of the ions and all kinetic phenomena associated with the ion's energy loss: target damage, sputtering, ionization, and phonon production of alpha (α) particles in gallium arsenide (GaAs). The calculation is performed with the Monte Carlo simulation code SRIM (Stopping and Range of Ions in Matter). A comparison of radiation tolerance between the conventional-scale and nanoscale GaAs layers is discussed as well. From the findings, it is observed that most of the damage formed in the GaAs layer is induced by the production of lattice defects in the form of vacancies, defect clusters and dislocations. However, when the GaAs layer is scaled down (nanoscaling), it is found that the layer can withstand higher radiation energies, in terms of displacement damage.

Amir, Haider F. Abdul; Chee, Fuei Pien

2012-09-01

280

A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES

We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

Schnittman, Jeremy D. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Krolik, Julian H., E-mail: jeremy.schnittman@nasa.gov, E-mail: jhk@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)

2013-11-01

281

NASA Astrophysics Data System (ADS)

A new method is presented to decouple the parameters of the incident e- beam hitting the target of the linear accelerator. It consists essentially in optimizing the agreement between measurements and calculations when both the difference filter (an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high energy photon beam) and the flattening filter are removed from the beam path. This leads to lateral dose-profile curves, which depend only on the mean energy of the incident electron beam, since the effect of the radial intensity distribution of the incident e- beam is negligible when both filters are absent. The location of the primary collimator and the thickness and density of the target are not considered as adjustable parameters, since a satisfactory working Monte Carlo model is obtained for the low energy photon beam (6 MV) of the linac using the same target and primary collimator. This method was applied to conclude that the mean energy of the incident e- beam for the high energy photon beam (18 MV) of our Elekta SLi Plus linac is equal to 14.9 MeV. After optimizing the mean energy, the modelling of the filters, in accordance with the information provided by the manufacturer, can be verified by positioning only one filter in the linac head while the other is removed. It is also demonstrated that the parameter setting for Bremsstrahlung angular sampling in BEAMnrc ('Simple' using the leading term of the Koch and Motz equation or 'KM' using the full equation) leads to different dose-profile curves for the same incident electron energy for the studied 18 MV beam. It is therefore important to perform the calculations in 'KM' mode. Note that both filters are not physically removed from the linac head. All filters remain present in the linac head and are only rotated out of the beam. This makes the described method applicable for practical usage since no recommissioning process is required.

DeSmedt, B.; Reynaert, N.; Flachet, F.; Coghe, M.; Thompson, M. G.; Paelinck, L.; Pittomvils, G.; DeWagter, C.; DeNeve, W.; Thierens, H.

2005-12-01

282

Underdosing of treatment targets can occur in radiation therapy due to electronic disequilibrium around air-tissue interfaces when tumors are situated near natural air cavities. These effects have been shown to increase with the beam energy and decrease with the field size. Intensity modulated radiation therapy (IMRT) and tomotherapy techniques employ combinations of multiple small radiation beamlets of varying intensities to deliver highly conformal radiation therapy. The use of small beamlets in these techniques may therefore result in underdosing of treatment targets in the air-tissue interface region surrounding an air cavity. This work was undertaken to investigate dose reductions near the air-water interfaces of 1×1×1 and 3×3×3 cm3 air cavities, typically encountered in the treatment of head and neck cancer utilizing radiation therapy techniques such as IMRT and tomotherapy using small fields of Co-60, 6 MV and 15 MV photons. Additional investigations were performed for larger photon field sizes encompassing the entire air cavity, such as encountered in conventional three dimensional conformal radiation therapy (3DCRT) techniques. The EGSnrc/DOSXYZnrc Monte Carlo code was used to calculate the dose reductions (in water) in the air-water interface region for single, parallel opposed and four field irradiations with 2×2 cm2 (beamlet), 10×2 cm2 (fan beam), 5×5 and 7×7 cm2 field sizes. The magnitude of dose reduction in water near the air-water interface increases with photon energy; it decreases with distance from the interface and as the number of beams is increased. No dose reductions were observed for large field sizes encompassing the air cavities. The results demonstrate that Co-60 beams may provide significantly smaller interface dose reductions than 6 MV and 15 MV irradiations for small field irradiations such as used in IMRT and tomotherapy. PMID:20589116

Joshi, Chandra P.; Darko, Johnson; Vidyasagar, P. B.; Schreiner, L. John

2010-01-01

283

Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.

M. F. Preston; L. S. Myers; J. R. M. Annand; K. G. Fissum; K. Hansen; L. Isaksson; R. Jebali; M. Lundin

2013-11-22

284

The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)

TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
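The discrete-ordinates machinery this abstract summarizes (angular quadrature, directional sweeps, an iterated scattering source) can be illustrated with a deliberately minimal one-group, one-dimensional sketch. This is not TORT itself; the function name, cross sections and diamond-difference spatial scheme are illustrative choices:

```python
import numpy as np

def sn_slab(sigma_t=1.0, sigma_s=0.5, q=1.0, width=4.0, nx=200,
            n_ang=8, tol=1e-8, max_iter=500):
    """One-group S_N solve of a homogeneous slab with isotropic scattering,
    a flat fixed source and vacuum boundaries: Gauss-Legendre angular
    quadrature, diamond-difference sweeps, source iteration."""
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # quadrature on [-1, 1]
    dx = width / nx
    phi = np.zeros(nx)                              # scalar flux
    for _ in range(max_iter):
        src = 0.5 * (sigma_s * phi + q)             # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            a = 2.0 * abs(mu[m]) / dx
            psi_in = 0.0                            # vacuum: no incoming flux
            order = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in order:                         # transport sweep along mu[m]
                psi_c = (src[i] + a * psi_in) / (sigma_t + a)
                psi_in = 2.0 * psi_c - psi_in       # diamond-difference closure
                phi_new[i] += w[m] * psi_c
        done = np.max(np.abs(phi_new - phi)) < tol
        phi = phi_new
        if done:
            break
    return phi
```

With a scattering ratio of 0.5, source iteration converges quickly; the interior flux stays below the infinite-medium value q/(σt − σs) = 2 because of leakage through the vacuum boundaries.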

Rhoades, W.A.; Simpson, D.B.

1997-10-01

285

A simplified spherical harmonic method for coupled electron-photon transport calculations

In this thesis the author has developed a simplified spherical harmonic method (SP{sub N} method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP{sub N} method has never before been applied to charged-particle transport. He has performed the first Fourier analysis of the source iteration scheme and the P{sub 1} diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP{sub N} equations. The theoretical analyses indicate that the source iteration and P{sub 1} DSA schemes are as effective for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. In addition, he has applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. It has previously been shown for 1-D S{sub N} calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. The author has investigated the applicability of the SP{sub N} approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems.

Josef, J.A.

1997-12-01

286

Status of Monte Carlo at Los Alamos

At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

Thompson, W.L.; Cashwell, E.D.

1980-01-01

287

NASA Astrophysics Data System (ADS)

This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). 
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

Lodwick, Camille J.

288

Development and modification of a virtual source model for Monte Carlo based IMRT verification

A Monte Carlo based phase space source model has been developed and modified to allow for IMRT dose verification. The model allows for the simulation of arbitrary intensity distributions without the inefficient step of calculating particle transport through the field defining collimators. The components of the linear accelerator treatment head were simulated using the code MCNP4B, providing the photon fluence

Randi Fogg; Indrin Chetty; J. J. DeMarco; N. Agazaeyan; T. D. Solberg

2000-01-01

289

Monte Carlo simulation of DNA damage induction by x-rays and selected radioisotopes

To better assess the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy, we used the PENELOPE Monte Carlo radiation transport code to estimate the spectrum of initial electrons produced by photons in single cells and in an irradiation geometry similar to those used in cell culture experiments. We then combined estimates of the initial spectrum

Y. Hsiao; R. D. Stewart

2008-01-01

290

Current status and new horizons in Monte Carlo simulation of X-ray CT scanners

With the advent of powerful computers and parallel processing including Grid technology, the use of Monte Carlo (MC) techniques for radiation transport simulation has become the most popular method for modeling radiological imaging systems and particularly X-ray computed tomography (CT). The stochastic nature of involved processes such as X-ray photon generation, interaction with matter and detection makes MC

Habib Zaidi; Mohammad Reza Ay

2007-01-01

291

Three-dimensional forest light interaction model using a Monte Carlo method

A model for light interaction with forest canopies is presented, based on Monte Carlo simulation of photon transport. A hybrid representation is used to model the discontinuous nature of the forest canopy. Large scale structure is represented by geometric primitives defining shapes and positions of the tree crowns and trunks. Foliage is represented within crowns by volume-averaged parameters describing the
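Monte Carlo photon-transport models of this kind rest on sampling free-path lengths between interactions from the exponential attenuation law of a (volume-averaged) turbid medium. A minimal sketch, where the extinction coefficient and function name are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def free_paths(sigma, n):
    """Sample n photon free-path lengths in a homogeneous turbid medium:
    path length s has pdf sigma * exp(-sigma * s), sampled by the
    inverse-CDF transform s = -ln(u) / sigma with u uniform on (0, 1)."""
    return -np.log(rng.random(n)) / sigma

s = free_paths(0.5, 500_000)  # mean free path = 1/sigma = 2.0
```

In a canopy model the constant sigma would be replaced by the local volume-averaged extinction of the crown the photon is traversing.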

Peter R. J. North

1996-01-01

292

Electron Transport in Silicon Nanocrystal Devices: From Memory Applications to Silicon Photonics

NASA Astrophysics Data System (ADS)

The push to integrate the realms of microelectronics and photonics on the silicon platform is currently lacking an efficient, electrically pumped silicon light source. One promising material system for photonics on the silicon platform is erbium-doped silicon nanoclusters (Er:Si-nc), which uses silicon nanoclusters to sensitize erbium ions in a SiO2 matrix. This medium can be pumped electrically, and this thesis focuses primarily on the electrical properties of Er:Si-nc films and their possible development as a silicon light source in the erbium emission band around 1.5 micrometers. Silicon nanocrystals can also be used as the floating gate in a flash memory device, and work is also presented examining charge transport in novel systems for flash memory applications. To explore silicon nanocrystals as a potential replacement for metallic floating gates in flash memory, the charging dynamics in silicon nanocrystal films are first studied using UHV-AFM. This approach uses a non-contact AFM tip to locally charge a layer of nanocrystals. Subsequent imaging allows the injected charge to be observed in real time as it moves through the layer. Simulation of this interaction allows the quantification of the charge in the layer, where we find that each nanocrystal is only singly charged after injection, while holes are retained in the film for hours. Work towards developing a dielectric stack with a voltage-tunable barrier is presented, with applications for flash memory and hyperspectral imaging. For hyperspectral imaging applications, film stacks containing various dielectrics are studied using I-V, TEM, and internal photoemission, with barrier tunability demonstrated in the Sc2O3/SiO2 system. To study Er:Si-nc as a potential lasing medium for silicon photonics, a theoretical approach is presented where Er:Si-nc is the gain medium in a silicon slot waveguide. 
By accounting for the local density of optical states effect on the emitters, and carrier absorption due to electrical pumping, it is shown that a pulsed excitation method is needed to achieve gain in this system. A gain of up to 2 dB/cm is predicted for an electrically pumped gain medium 50 nm thick. To test these predictions, Er:Si-nc LEDs were fabricated and studied. Reactive oxygen sputtering is found to produce more robust films, and the electrical excitation cross section found is two orders of magnitude larger than the optical cross section. The fabricated devices exhibited low lifetimes and low current densities, which prevented observation of gain, and the modeling is used to predict how the films must be improved to achieve gain and lasing in this system.

Miller, Gerald M.

293

MCNP/X TRANSPORT IN THE TABULAR REGIME

The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

HUGHES, H. GRADY [Los Alamos National Laboratory

2007-01-08

294

NASA Astrophysics Data System (ADS)

A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSXYZnrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, the distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms, with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
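The γ index evaluation described above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1-D version, assuming a global dose-difference normalisation (the study's exact normalisation convention is not stated here), might look like:

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dta=3.0, dd=0.03):
    """Global 1-D gamma index: for each reference point, the minimum over
    all evaluated points of sqrt((dose diff / dose criterion)^2 +
    (distance / dta)^2); a point passes when gamma <= 1."""
    norm = dd * dose_ref.max()              # global dose-difference criterion
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        term = ((dose_eval - di) / norm) ** 2 + ((x - xi) / dta) ** 2
        gamma[i] = np.sqrt(term.min())
    return gamma

# a uniform 2% dose offset should pass everywhere at 3%/3 mm
x = np.linspace(0.0, 100.0, 201)            # positions in mm
ref = np.exp(-x / 80.0)                     # toy depth-dose curve
g = gamma_1d(x, ref, 1.02 * ref)
pass_rate = float(np.mean(g <= 1.0))
```

Production implementations interpolate the evaluated distribution between grid points; the brute-force minimum over sample points shown here overestimates γ slightly on coarse grids.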

Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

2007-03-01

295

Monte Carlo simulation study of RPC-based 0.511 MeV photon detector with GEANT4

NASA Astrophysics Data System (ADS)

Resistive Plate Chambers (RPCs) are low-cost charged-particle detectors with good timing resolution and potentially good spatial resolution. Using RPCs as gamma detectors provides an opportunity for application in positron emission tomography (PET). In this work, we use the GEANT4 simulation package to study various methods of improving the detection efficiency of a realistic RPC-based PET model for 511 keV photons: adding more detection units, changing the thickness of each layer, choosing different converters and using the multi-gap RPC (MRPC) technique. The balance among these factors is discussed. It is found that although RPCs with high-atomic-number converter materials can reach higher efficiency, they may contribute to poorer spatial resolution and a higher background level.

Zhou, W.; Shao, M.; Li, C.; Chen, H.; Sun, Y.; Chen, T.

2014-09-01

296

Angular biasing in implicit Monte-Carlo

Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
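The essence of the angular-biasing scheme, launching more (correspondingly lower-weight) photons toward the directions of interest while keeping every tally unbiased through the ratio of analog to biased pdfs, can be sketched in one direction-cosine dimension. The linear biasing pdf below is an illustrative choice, not the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_tally(n, mu_cut=0.9):
    """Estimate the fraction of isotropically emitted photons with direction
    cosine mu > mu_cut, but sample mu from the biased pdf p_b(mu) = (1+mu)/2
    (favouring the target hemisphere) and give each photon the compensating
    weight p_analog / p_b = (1/2) / p_b, so the tally stays unbiased."""
    u = rng.random(n)
    mu = 2.0 * np.sqrt(u) - 1.0            # inverse CDF of p_b on [-1, 1]
    weight = 0.5 / ((1.0 + mu) / 2.0)      # analog pdf over biased pdf
    return float(np.mean(weight * (mu > mu_cut)))

est = biased_tally(200_000)                # analog expectation: (1 - 0.9)/2
```

Because roughly twice as many samples land in the forward cone as in the analog case, the variance of forward-direction tallies drops while the expectation is unchanged.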

Zimmerman, G.B.

1994-10-20

297

The F{sub N} basis function expansion solution to the Boltzmann transport equation in Cartesian geometry is summarized and evaluated for several heterogeneous slabs of interest. The resultant scalar and angular fluxes and the critical slab thickness (when applicable) compare well with the Monte Carlo transport evaluations by MCNP. A correspondence between the one-group macroscopic cross section used in the F{sub N} code is made to energy-independent synthetic MCNP microscopic cross sections. The F{sub N} method produces results comparable to MCNP, requires fewer computer resources, but is limited to specific problem types.

Singleterry, R.C. Jr. [Argonne National Lab., Idaho Falls, ID (United States); Jahshan, S. [SNJ Consulting, Idaho Falls, ID (United States)

1996-04-01

298

NASA Astrophysics Data System (ADS)

Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with millikelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.

Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.

2014-09-01

299

NASA Astrophysics Data System (ADS)

A novel approach is proposed for charged particle transport problems using a recently developed second-order, self-adjoint angular flux (SAAF) form of the Boltzmann transport equation with continuous slowing-down (CSD). A linear continuous (LC) in space and linear discontinuous (LD) in energy finite element discretization is implemented in the computer code DOET1D: Discrete Ordinates Electron-Photon Transport in 1D. DOET1D is a one-dimensional, Cartesian coordinates, multigroup, discrete ordinates code for charged particle transport which employs CEPXS-generated cross sections to incorporate electron and photon transport physics. The discrete ordinates SAAF transport equation is solved using scattering source iteration in conjunction with diffusion synthetic acceleration (DSA). The angular fluxes are computed simultaneously at all mesh points by solving a system of equations for each direction and each energy group. The application of LC finite elements in space yields a symmetric, positive definite coefficient matrix which is tridiagonal in structure and solved efficiently using a standard tridiagonal matrix solver. A second and unique within-group iteration, referred to as upscatter, is introduced by the LD energy discretization. The upscatter iteration is separate from the source iteration and requires an independent acceleration scheme. A synthetic acceleration technique is derived to increase the rate of convergence of the upscatter iteration and implemented successfully in DOET1D. The estimated spectral radius for the accelerated equations is sufficiently small that an efficient algorithm is achieved by performing at most two iterations for the DSA and upscatter steps. Accurate charge and dose deposition profiles were obtained from the LD SAAF equation for several coupled electron-photon transport problems. 
Most importantly, it is demonstrated that the LD SAAF equation is able to accurately resolve charge and dose deposition at material interfaces between high-Z and low-Z materials.
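The "standard tridiagonal matrix solver" the abstract refers to is typically the Thomas algorithm, a specialised O(n) Gaussian elimination for tridiagonal systems. A generic sketch (not the DOET1D implementation):

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) forward elimination / back substitution for a
    tridiagonal system. lower[i] multiplies x[i-1] in row i (lower[0] is
    unused) and upper[i] multiplies x[i+1] (upper[-1] is unused)."""
    n = len(diag)
    c = np.empty(n)                      # modified upper-diagonal coefficients
    d = np.empty(n)                      # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                # forward elimination
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

For the symmetric positive definite matrices produced by the LC spatial discretization, no pivoting is needed and the algorithm is unconditionally stable.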

Liscum-Powell, Jennifer Lane

2000-10-01

300

The percentage depth dose in the build-up region and the surface dose for the 6-MV photon beam from a Varian Clinac 23EX medical linear accelerator were investigated for square field sizes of 5 × 5, 10 × 10, 15 × 15 and 20 × 20 cm(2) using the EGS4nrc Monte Carlo (MC) simulation package. The depth dose was found to change rapidly in the build-up region, and the percentage surface dose increased proportionally with the field size from approximately 10% to 30%. The measurements were also taken using four common detectors: TLD chips, PFD dosimeter, parallel-plate and cylindrical ionization chamber, and compared with MC simulated data, which served as the gold standard in our study. The surface doses obtained from each detector were derived from the extrapolation of the measured depth doses near the surface and were all found to be higher than that of the MC simulation. The lowest and highest over-responses in the surface dose measurement were found with the TLD chip and the CC13 cylindrical ionization chamber, respectively. Increasing the field size increased the percentage surface dose almost linearly in the various dosimeters and also in the MC simulation. Interestingly, the use of the CC13 ionization chamber eliminates the high gradient feature of the depth dose near the surface. The correction factors for the measured surface dose from each dosimeter for square field sizes of between 5 × 5 and 20 × 20 cm(2) are introduced. PMID:23104898
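The surface-dose extraction described, extrapolating measured build-up-region depth doses back to zero depth, amounts to a short polynomial fit. A minimal linear-extrapolation sketch; the number of fit points and the toy data are assumptions, since the study's exact extrapolation procedure is not given here:

```python
import numpy as np

def surface_dose(depth_mm, pdd, n_points=4):
    """Estimate the percentage surface dose by fitting a straight line to
    the first n_points of the measured percentage-depth-dose curve and
    extrapolating it back to zero depth."""
    slope, intercept = np.polyfit(depth_mm[:n_points], pdd[:n_points], 1)
    return float(intercept)                       # dose at depth 0

# toy build-up curve rising linearly from 22% at the surface
depth = np.array([1.0, 2.0, 3.0, 4.0, 6.0])      # mm
pdd = 22.0 + 14.0 * depth
est = surface_dose(depth, pdd)
```

In practice the fit points must lie shallow enough that the build-up curve is still approximately linear, which is exactly where detector over-response distorts the measured values.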

Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn

2013-03-01

301

The percentage depth dose in the build-up region and the surface dose for the 6-MV photon beam from a Varian Clinac 23EX medical linear accelerator were investigated for square field sizes of 5 × 5, 10 × 10, 15 × 15 and 20 × 20 cm2 using the EGS4nrc Monte Carlo (MC) simulation package. The depth dose was found to change rapidly in the build-up region, and the percentage surface dose increased proportionally with the field size from approximately 10% to 30%. The measurements were also taken using four common detectors: TLD chips, PFD dosimeter, parallel-plate and cylindrical ionization chamber, and compared with MC simulated data, which served as the gold standard in our study. The surface doses obtained from each detector were derived from the extrapolation of the measured depth doses near the surface and were all found to be higher than that of the MC simulation. The lowest and highest over-responses in the surface dose measurement were found with the TLD chip and the CC13 cylindrical ionization chamber, respectively. Increasing the field size increased the percentage surface dose almost linearly in the various dosimeters and also in the MC simulation. Interestingly, the use of the CC13 ionization chamber eliminates the high gradient feature of the depth dose near the surface. The correction factors for the measured surface dose from each dosimeter for square field sizes of between 5 × 5 and 20 × 20 cm2 are introduced. PMID:23104898

Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn

2013-01-01

302

A new method is presented to decouple the parameters of the incident e(-) beam hitting the target of the linear accelerator. It consists essentially in optimizing the agreement between measurements and calculations when both the difference filter (an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high energy photon beam) and the flattening filter are removed from the beam path. This leads to lateral dose-profile curves, which depend only on the mean energy of the incident electron beam, since the effect of the radial intensity distribution of the incident e- beam is negligible when both filters are absent. The location of the primary collimator and the thickness and density of the target are not considered as adjustable parameters, since a satisfactory working Monte Carlo model is obtained for the low energy photon beam (6 MV) of the linac using the same target and primary collimator. This method was applied to conclude that the mean energy of the incident e- beam for the high energy photon beam (18 MV) of our Elekta SLi Plus linac is equal to 14.9 MeV. After optimizing the mean energy, the modelling of the filters, in accordance with the information provided by the manufacturer, can be verified by positioning only one filter in the linac head while the other is removed. It is also demonstrated that the parameter setting for Bremsstrahlung angular sampling in BEAMnrc ('Simple' using the leading term of the Koch and Motz equation or 'KM' using the full equation) leads to different dose-profile curves for the same incident electron energy for the studied 18 MV beam. It is therefore important to perform the calculations in 'KM' mode. Note that both filters are not physically removed from the linac head. All filters remain present in the linac head and are only rotated out of the beam. This makes the described method applicable for practical usage since no recommissioning process is required. PMID:16333165

De Smedt, B; Reynaert, N; Flachet, F; Coghe, M; Thompson, M G; Paelinck, L; Pittomvils, G; De Wagter, C; De Neve, W; Thierens, H

2005-12-21

303

NASA Astrophysics Data System (ADS)

The image forming process in a CdTe detector is both a function of the X-ray interaction in the material, including scattering and fluorescence, and the charge transport in the material [2-4]. The response to individual photons has been investigated using a CdTe detector with a pixel size of 110 μm, bonded to a TIMEPIX [5] readout chip operating in time-over-threshold mode. The device has been illuminated with mono-energetic photons generated by fluorescence in different metals and by gamma emission from 241Am and 137Cs. Each interaction will result in charge distributed in a cluster of pixels, where the total charge in the cluster should sum up to the initial photon energy. By looking at the individual clusters, the response from shared photons as well as fluorescence photons can be identified and separated. By using energies below and above the K-edges of Cd and Te, the contribution from fluorescence can be further isolated. The response is analyzed to investigate the effects of both charge diffusion and fluorescence on the spectral response in the detector.
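Recovering photon energies by summing the charge shared across a cluster of pixels, as described above, is a connected-component sum over the frame. A minimal 4-connected flood-fill sketch; the threshold, function name and toy charge values are illustrative (59.5 keV is roughly the 241Am gamma line):

```python
import numpy as np
from collections import deque

def cluster_energies(frame, threshold=0.0):
    """Sum the charge in each 4-connected cluster of above-threshold pixels;
    per-cluster totals approximate the incident photon energies when charge
    sharing spreads one interaction over neighbouring pixels."""
    seen = np.zeros(frame.shape, dtype=bool)
    totals = []
    for i0, j0 in zip(*np.nonzero(frame > threshold)):
        if seen[i0, j0]:
            continue
        seen[i0, j0] = True
        total, queue = 0.0, deque([(i0, j0)])
        while queue:                     # flood-fill one cluster
            i, j = queue.popleft()
            total += float(frame[i, j])
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < frame.shape[0] and 0 <= nj < frame.shape[1]
                        and frame[ni, nj] > threshold and not seen[ni, nj]):
                    seen[ni, nj] = True
                    queue.append((ni, nj))
        totals.append(total)
    return totals
```

A real analysis would additionally classify clusters by size and shape to separate charge-shared events from fluorescence escape, which this sketch does not attempt.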

Fröjdh, E.; Norlin, B.; Thungström, G.; Fröjdh, C.

2011-02-01

304

The Monte Carlo (MC) method can accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually straightforward because of the 'embarrassingly parallel' nature of MC codes. The situation is different for eigenvalue calculations, however, which are performed on a generation-by-generation basis so that thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code performs eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. Speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively, and the speedup was further increased by a factor of approximately 2 on a dual-GPU system. The upper limit of device-level parallelism is analyzed, and a possible method to enhance thread-level parallelism is proposed. (authors)
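
The generation-by-generation structure that complicates GPU threading can be seen in a serial sketch. This is a hypothetical one-group, infinite-medium toy model, not the authors' CUDA code: every history ends in an absorption that is a fission with probability `p_fission`, releasing `nu` neutrons on average, so analytically k = nu * p_fission.

```python
import random

def k_eigenvalue(n_per_gen, p_fission, nu, n_gen=50, skip=10, seed=1):
    """Toy generation-by-generation MC eigenvalue estimate: k for each
    generation is the ratio of fission offspring to parent histories,
    and the final estimate averages the generations after 'skip'."""
    rng = random.Random(seed)
    k_hist = []
    for _ in range(n_gen):
        offspring = 0
        for _ in range(n_per_gen):      # one thread per history on a GPU
            if rng.random() < p_fission:
                base = int(nu)          # stochastic rounding of nu
                offspring += base + (rng.random() < nu - base)
        k_hist.append(offspring / n_per_gen)
        # population control: the next generation is resampled back to
        # n_per_gen sites (trivial here, as the medium is infinite)
    return sum(k_hist[skip:]) / len(k_hist[skip:])
```

Each generation is a global synchronization point: all histories must finish before the next fission bank is built, which is why thread coordination must be explicit on a GPU.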

Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

2012-07-01

305

NASA Astrophysics Data System (ADS)

Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte Carlo (PIMC) method for the calculation of thermodynamic properties and, closely related to the latter, a Wigner dynamics method for the calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure of sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variables. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential for realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP exhibits quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike qq̄ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates; the lattice and heavy-quark-quenching results on heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/4π ≲ η/S < 2.5/4π, in the temperature range from 170 to 440 MeV.

Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.

2013-03-01

306

Implementation of Monte Carlo dose calculation for CyberKnife treatment planning

NASA Astrophysics Data System (ADS)

Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while the differences are negligible for the head and neck and liver cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes; this can be remedied by accurate Monte Carlo dose calculations.

Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

2008-02-01

307

The single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed. The single-photon transmission and reflection amplitudes, as well as the cavity and atom excitation amplitudes, are solved exactly via a real-space approach. It is shown that the dissipation of the cavity and that of the atom affect the transport properties of the photons, and the relative phase between the excitation amplitudes of the cavity mode and the atom, in distinct ways.

Jung-Tsung Shen; Shanhui Fan

2009-01-26

308

Hybrid Mie-MCML Monte Carlo simulation of light propagation in skin layers

NASA Astrophysics Data System (ADS)

Monte Carlo modeling of light transport in multi-layered tissues (MCML) has been used for simulating light transport in human skin layers. The Monte Carlo simulations can perform ray tracing of light on the basis of optical energy. A hybrid simulator combining MCML with Mie scattering theory (HMCS) has been developed in this study to analyze light propagating in human skin on the amplitude basis. HMCS and MCML are compared in terms of the diffuse light intensity profile at the skin surface and the photon fluence as a function of penetration depth for a three-layered model of skin tissue.
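
The photon-transport core of an MCML-style simulation rests on two samples per step: a free-path length from the exponential attenuation law and a scattering-angle cosine from the Henyey-Greenstein phase function. A minimal sketch (function names are ours, not MCML's):

```python
import math
import random

def sample_hg_cos(g, rng):
    """Inverse-CDF sample of the scattering-angle cosine from the
    Henyey-Greenstein phase function with anisotropy factor g; the
    mean of the sampled cosines equals g."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                       # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def sample_step(mu_t, rng):
    """Free-path length s with pdf p(s) = mu_t * exp(-mu_t * s),
    where mu_t is the total interaction coefficient."""
    return -math.log(1.0 - rng.random()) / mu_t
```

Typical dermal tissue is strongly forward-scattering (g around 0.8-0.9), which is why the anisotropic phase function, rather than isotropic scattering, is essential in skin models.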

Kawai, Yu; Iwai, Toshiaki

2014-08-01

309

NASA Astrophysics Data System (ADS)

Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required to characterize the neutron source and to validate the treatment planning. In this work, Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.

Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.

2010-04-01

310

Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell, including the cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated in a geometry of 840 cells. Each cell was placed at a random position with random orientation and a diameter varying between 9 and 13 μm, and a mathematical algorithm ensured that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios with the atomic de-excitation processes in Geant4 activated and deactivated. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER calculated at 6 MV was at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum.
Based on the simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell: the low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster, so they have a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.

Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide, South Australia 5000 (Australia); Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide South Australia 5000 (Australia)

2013-07-15

311

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
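
Two of the fundamentals listed above, random number generation and random sampling, can be sketched together: a multiplicative linear congruential generator driving inverse-transform sampling of an exponential free path. The constants below are the classic 5^19 multiplier with modulus 2^48 used historically in transport codes; RACER's own generator may differ.

```python
import math

class LCG:
    """Minimal multiplicative linear congruential generator,
    state_{n+1} = (a * state_n) mod m, returning uniforms in (0, 1)."""
    def __init__(self, seed=1):
        self.m = 2 ** 48
        self.a = 5 ** 19
        self.state = (seed | 1) % self.m   # multiplicative LCG needs odd state

    def random(self):
        self.state = (self.a * self.state) % self.m
        return self.state / self.m

def sample_free_path(mu_t, rng):
    """Inverse-transform sample of the exponential free-path pdf
    p(s) = mu_t * exp(-mu_t * s):  s = -ln(xi) / mu_t."""
    return -math.log(rng.random()) / mu_t
```

Inverse-transform sampling works because setting the cumulative distribution F(s) = 1 - exp(-mu_t s) equal to a uniform variate and solving for s reproduces the exponential law; the sample mean converges to 1/mu_t.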

Brown, F.B.; Sutton, T.M.

1996-02-01

312

A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63–82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management. PMID:18697528

Bednarz, Bryan; Xu, X. George

2008-01-01

313

A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. The method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. PMID:22538321
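
The weight bookkeeping behind splitting and Russian roulette is simple to state in code. A minimal sketch (our own illustration, not the authors' implementation): splitting divides one particle's statistical weight among n copies, while roulette kills low-weight particles with a survival probability and boosts survivors, so both leave the expected weight, and hence the tallies, unbiased.

```python
import random

def split(weight, n):
    """Split one particle into n copies carrying weight/n each;
    the total weight is preserved exactly."""
    return [weight / n] * n

def roulette(weight, rng, threshold=0.1, survival=0.5):
    """Russian roulette on low-weight particles: kill with probability
    1 - survival, boost survivors by 1/survival, so the expected
    weight (and hence any tally) is unchanged."""
    if weight >= threshold:
        return weight
    return weight / survival if rng.random() < survival else 0.0
```

Splitting spends extra work where histories matter (e.g., photons aimed at the field), and roulette recovers that work where they do not; combining the two is what yields the net efficiency gain.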

Rodriguez, M; Sempau, J; Brualla, L

2012-05-21

314

NASA Astrophysics Data System (ADS)

A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. The method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.

Rodriguez, M.; Sempau, J.; Brualla, L.

2012-05-01

315

At the Budker Institute of Nuclear Physics, an epithermal neutron source for neutron-capture therapy was built and neutron generation was realized. The source is based on a tandem accelerator and uses near-threshold neutron generation from the reaction (7)Li(p,n)(7)Be. The paper describes target optimization through the numerical simulation of proton, neutron and gamma transport by the Monte Carlo method (PRIZMA code). It is shown that the near-threshold mode, attractive due to its low activation, provides a high dose efficiency and an acceptable therapeutic ratio and advantage depth. PMID:21482125

Kandiev, Ya; Kashaeva, E; Malyshkin, G; Bayanov, B; Taskaev, S

2011-12-01

316

High-energy photon transport modeling for oil-well logging

Nuclear oil well logging tools utilizing radioisotope sources of photons are used ubiquitously in oilfields throughout the world. Because of safety and security concerns, there is renewed interest in shifting to ...

Johnson, Erik D., Ph. D. Massachusetts Institute of Technology

2009-01-01

317

A novel approach is proposed for charged particle transport calculations using a recently developed second-order, self-adjoint angular flux (SAAF) form of the Boltzmann transport equation with continuous slowing-down. A finite element discretization that is linear continuous in space and linear discontinuous (LD) in energy is described and implemented in a one-dimensional, planar geometry, multigroup, discrete ordinates code for charged particle transport. The cross-section generating code CEPXS is used to generate the electron and photon transport cross sections employed in this code. The discrete ordinates SAAF transport equation is solved using source iteration in conjunction with an inner iteration acceleration scheme and an outer iteration acceleration scheme. Outer iterations are required with the LD energy discretization scheme because the two angular flux unknowns within each group are coupled, which gives rise to effective upscattering. The inner iteration convergence is accelerated using diffusion synthetic acceleration, and the outer iteration convergence is accelerated using a diamond difference approximation to the LD energy discretization. Computational results are given that demonstrate the effectiveness of our convergence acceleration schemes and the accuracy of our discretized SAAF equation.
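
The source-iteration scheme that the acceleration methods above address can be reduced to its scalar essence: for a one-group infinite medium, each sweep computes phi_{l+1} = (sigma_s * phi_l + q) / sigma_t, and the error shrinks by the scattering ratio c = sigma_s / sigma_t per iteration. A toy sketch (ours, not the SAAF code):

```python
def source_iteration(sigma_t, sigma_s, q, tol=1e-10, max_iter=100000):
    """Unaccelerated source iteration for an infinite-medium one-group
    problem. The exact solution is q / (sigma_t - sigma_s); the error
    is damped by c = sigma_s / sigma_t each sweep, which is why
    acceleration (e.g., DSA) matters as c approaches 1."""
    phi, iters = 0.0, 0
    while iters < max_iter:
        phi_new = (sigma_s * phi + q) / sigma_t
        iters += 1
        converged = abs(phi_new - phi) < tol
        phi = phi_new
        if converged:
            break
    return phi, iters
```

Comparing the iteration counts returned for c = 0.5 versus c = 0.9 makes the motivation for diffusion synthetic acceleration concrete: convergence stalls as the scattering ratio approaches one.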

Liscum-Powell, Jennifer L. [Sandia National Laboratories (United States); Prinja, Anil B. [University of New Mexico (United States); Morel, Jim E. [Los Alamos National Laboratory (United States); Lorence, Leonard J Jr. [Sandia National Laboratories (United States)

2002-11-15

318

Monte Carlo particle transport is an inherently parallel (or embarrassingly parallel) computational method that has been studied on a number of alternative architectures [1,2,3,4,5,6]. Currently there is also interest in exploring multithreaded architectures to improve the parallel performance of scientific codes. This paper summarizes recent experiences with adapting

Majumdar, Amit

319

Modelling plastic scintillator response to gamma rays using light transport incorporated FLUKA code.

The response function of NE102 plastic scintillator to gamma rays has been simulated using a joint FLUKA+PHOTRACK Monte Carlo code. The multi-purpose particle transport code, FLUKA, has been responsible for gamma transport whilst the light transport code, PHOTRACK, has simulated the transport of scintillation photons through scintillator and lightguide. The simulation results of plastic scintillator with/without light guides of different surface coverings have been successfully verified with experiments. PMID:22341953

Ranjbar Kohan, M; Etaati, G R; Ghal-Eh, N; Safari, M J; Afarideh, H; Asadi, E

2012-05-01

320

Monte Carlo simulations in SPET and PET

Monte Carlo methods are extensively used in Nuclear Medicine to tackle a variety of problems that are difficult to study by an experimental or analytical approach. A review of the most recent tools allowing application of Monte Carlo methods in single photon emission tomography (SPET) and positron emission tomography (PET) is presented. To help potential Monte Carlo users choose

I. Buvat; I. Castiglioni

2002-01-01

321

NASA Astrophysics Data System (ADS)

We study theoretically the possible origin of the double-peak fine structure of surface relief gratings (SRG) in azo-functionalized poly(etherimide) reported recently in experiments. To improve the statistics of the experimental data, additional measurements were performed. For the theoretical analysis we develop a stochastic Monte Carlo model of photoinduced mass transport in an azobenzene-functionalized polymer matrix. The long-sought transport of polymer chains from bright to dark places of the illumination pattern is demonstrated and characterized, and various scenarios for the intertwined build-up of the density and SRG gratings are examined. The model predicts that for some azo-functionalized materials double-peak SRG maxima can develop in the permanent, quasi-permanent or transient regimes. Available experimental data are interpreted in terms of the model's predictions.

Pawlik, G.; Miniewicz, A.; Sobolewska, A.; Mitus, A. C.

2014-01-01

322

The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an approximately 0.5-microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.

Johnson, J.O.

2000-10-23

323

Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning

An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and successive generations of treatment planning systems have been developed accordingly. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue, using the physics of the interaction of the particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours, so Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform a dose calculation in a time reasonable for clinical use; the acceptable time for a dose calculation is on the order of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and many commercial treatment planning systems perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
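
The 'several million particles' figure follows from the 1/sqrt(N) scaling of Monte Carlo statistical uncertainty. A small sketch (hypothetical helper names, ours) that estimates the relative standard error of a pilot tally and extrapolates the history count needed for a target precision:

```python
import math

def rel_std_error(samples):
    """Relative standard error of the mean of per-history tally
    samples: sqrt(var / n) / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var / n) / mean

def histories_for_target(n_pilot, rel_err_pilot, rel_err_target):
    """Since the relative error scales as 1/sqrt(N), the history count
    needed for a target precision follows from a pilot run:
    N = N_pilot * (rel_pilot / rel_target)**2."""
    return int(n_pilot * (rel_err_pilot / rel_err_target) ** 2)
```

Halving the statistical uncertainty thus costs four times the histories, which is exactly the trade-off that fast Monte Carlo codes attack by reducing the cost per history rather than the number of histories.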

Jabbari, Keyvan

2011-01-01

324

links three external codes together to create these libraries. The code creates an MCNP (Monte Carlo N-Particle) model of the reactor and calculates the zoneaveraged scalar flux in various tally regions and a core-averaged scalar flux tallied by energy...

Hiatt, Matthew Torgerson

2009-06-02

325

We present a semiconductor master equation technique to study the input/output characteristics of coherent photon transport in a semiconductor waveguide-cavity system containing a single quantum dot. We use this approach to investigate the effects of photon propagation and anharmonic cavity-QED for various dot-cavity interaction strengths, including the weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the anharmonic multiphoton correlations, we compute the Fano factor and the correlation error associated with making a semiclassical approximation. We also explore the role of electron-acoustic-phonon scattering and find that phonon-mediated scattering plays a qualitatively important role in the light propagation characteristics. As an application of the theory, we simulate a conditional phase gate at a phonon bath temperature of 20 K in the strong coupling regime.
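
The Fano factor used above is F = Var(n)/⟨n⟩ of the photon-number distribution: F = 1 for Poissonian (coherent) light and F < 1 for sub-Poissonian light. A small sketch (ours, not the authors' master-equation code) computing it from a number distribution given as probabilities:

```python
import math

def fano(pn):
    """Fano factor F = Var(n) / <n> of a photon-number distribution,
    given as a dict mapping photon number n to probability p(n)."""
    mean = sum(n * p for n, p in pn.items())
    mean_sq = sum(n * n * p for n, p in pn.items())
    return (mean_sq - mean ** 2) / mean

def poisson_pmf(lam, nmax=30):
    """Truncated Poisson distribution with mean lam (coherent light)."""
    return {n: math.exp(-lam) * lam ** n / math.factorial(n)
            for n in range(nmax + 1)}
```

A mixture like {0: 0.9, 1: 0.1}, a crude stand-in for a heralded single-photon state, gives F = 0.9, i.e., sub-Poissonian statistics.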

S. Hughes; C. Roy

2011-11-17

326

NASA Astrophysics Data System (ADS)

The limiting factors for the scintigraphic clinical application are related to i) biosource characteristics (pharmacokinetic of the drug distribution between organs), Detection chain (photons transport, scintillation, analog to digital signal conversion, etc.) Imaging (Signal to Noise ratio, Spatial and Energy Resolution, Linearity etc) In this work, by using Monte Carlo time resolved transport simulations on a mathematical phantom and on a small field of view scintigraphic device, the trade off between the aforementioned factors was preliminary investigated.

Burgio, N.; Ciavola, C.; Santagata, A.; Iurlaro, G.; Montani, L.; Scafè, R.

2006-04-01

327

Three-dimensional representation of leaf anatomy - Application of photon transport

Great progress has been made within the last decade in the modeling of radiation transfer in vegetation canopies and recently in plant leaves. Radiosity or ray tracing models have opened new prospects in the application of remote sensing to agriculture and ecology. At the leaf scale, it is now possible to track a single photon from cell to cell and

S. Jacquemoud; J.-P. Frangi; Y. Govaerts

328

Simulation of neutron/photon die-away in a pure water system using MCNP4B

It is becoming commonplace in the well-logging industry to simulate many different types of nuclear well-logging techniques using Monte Carlo neutron/photon transport codes, such as MCNP, to fill the gaps where experimental data are not available. However, Monte Carlo simulations can become very time consuming because a large number of particle histories are usually necessary to obtain results that are

J. A. Miller; G. D. Spriggs

1998-01-01

329

The crucial problem for radiation shielding design at heavy ion accelerator facilities with beam energies of several GeV/n is the source term. Experimental data on double differential neutron yields from thick targets irradiated with high-energy uranium nuclei are lacking, and at present there are not many multipurpose Monte Carlo codes that can handle primary high-energy uranium nuclei. These codes

L. Beskrovnaia; B. Florko; M. Paraipan; N. Sobolevsky; G. Timoshenko

2008-01-01

330

The role of plasma evolution and photon transport in optimizing future advanced lithography sources

Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on small liquid tin droplets as targets, which offer many advantages, including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets are of diameters in the range of 20-30 μm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with a conversion efficiency around 3% and with a collected EUV power of 190 W or more, which can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail the plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect the conditions for optimum EUV photon generation and collection. The importance of accurately modeling these physical processes increases as the target size and its simulation domain decrease.

Sizyuk, Tatyana; Hassanein, Ahmed [Center for Materials Under Extreme Environment, School of Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

2013-08-28

331

An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

Burns, T.J.

1994-03-01

332

Fast Monte Carlo for radiation therapy: the PEREGRINE Project

The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

1997-11-11

333

We present the implementation, validation, and performance of a Neumann-series approach for simulating light propagation at optical wavelengths in uniform media using the radiative transport equation (RTE). The RTE is solved for an anisotropic-scattering medium in a spherical harmonic basis for a diffuse-optical-imaging setup. The main objectives of this paper are threefold: to present the theory behind the Neumann-series form for the RTE, to design and develop the mathematical methods and the software to implement the Neumann series for a diffuse-optical-imaging setup, and, finally, to perform an exhaustive study of the accuracy, practical limitations, and computational efficiency of the Neumann-series method. Through our results, we demonstrate that the Neumann-series approach can be used to model light propagation in uniform media with small geometries at optical wavelengths. PMID:23201893
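
The Neumann-series idea can be demonstrated on a tiny discretized kernel (a toy 2x2 matrix, not the paper's spherical-harmonic RTE operator): phi = sum over n of K^n q solves (I - K) phi = q whenever the spectral radius of K is below one, which physically corresponds to losing some light at each scattering event.

```python
def matvec(K, x):
    """Dense matrix-vector product for a small discretized kernel."""
    return [sum(K[i][j] * x[j] for j in range(len(x)))
            for i in range(len(x))]

def neumann_solve(K, q, terms=60):
    """Truncated Neumann series phi = q + K q + K^2 q + ...;
    term n is the contribution of photons scattered n times, so the
    series converges when the spectral radius of K is below one."""
    phi = list(q)       # n = 0 term: unscattered source
    term = list(q)
    for _ in range(terms):
        term = matvec(K, term)               # next scattering order
        phi = [a + b for a, b in zip(phi, term)]
    return phi
```

Truncating the series at a fixed order amounts to ignoring photons scattered more than that many times, which is why the approach suits small geometries where high scattering orders contribute little.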

Jha, Abhinav K.; Kupinski, Matthew A.; Masumura, Takahiro; Clarkson, Eric; Maslov, Alexey V.; Barrett, Harrison H.

2014-01-01

334

High-speed DC transport of emergent monopoles in spinor photonic fluids.

We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles. PMID:25083658

Terças, H; Solnyshkov, D D; Malpuech, G

2014-07-18

335

We investigate the dispersion and transmission property of slow-light coupled-resonator optical waveguides that consist of more than 100 ultrahigh-Q photonic crystal cavities. We show that experimental group-delay spectra exhibited good agreement with numerically calculated dispersions obtained with the three-dimensional plane wave expansion method. Furthermore, a statistical analysis of the transmission property indicated that fabrication fluctuations in individual cavities are less relevant than in the localized regime. These behaviors are observed for a chain of up to 400 cavities.

Matsuda, Nobuyuki; Takesue, Hiroki; Notomi, Masaya

2014-01-01

336

The triple- and quadruple-escape peaks of 6.128 MeV photons from the ¹⁹F(p,αγ)¹⁶O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code PENELOPE.

N. L. Maidana; L. Brualla; V. R. Vanin; J. R. B. Oliveira; M. A. Rizzutto; E. Do Nascimento; J. M. Fernández-Varea

2010-01-01

337

The triple- and quadruple-escape peaks of 6.128 MeV photons from the ¹⁹F(p,αγ)¹⁶O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code penelope.

N. L. Maidana; L. Brualla; V. R. Vanin; J. R. B. Oliveira; M. A. Rizzutto; E. do Nascimento; J. M. Fernández-Varea

2010-01-01

338

NASA Astrophysics Data System (ADS)

Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
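The variance-reduction idea behind such importance-sampling schemes can be shown on a generic rare-event estimate: sample from a biased distribution that reaches the region of interest often, and multiply each sample by the likelihood ratio so the estimator stays unbiased. A sketch (the distributions are chosen for illustration, not the OCT geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate P(X > 5) for X ~ Exp(1): a rare event, loosely analogous to the
# few quasi-ballistic photons that reach an OCT detector.
exact = np.exp(-5.0)
n = 100_000

# Biased sampling: draw from Exp(rate=0.2), which exceeds 5 often, and
# correct each sample with the likelihood ratio w = p(x)/q(x).
x = rng.exponential(scale=1 / 0.2, size=n)
w = np.exp(-x) / (0.2 * np.exp(-0.2 * x))
est = np.mean((x > 5.0) * w)
assert abs(est - exact) / exact < 0.05
```

Because every surviving sample carries its exact likelihood-ratio weight, the estimator has no systematic bias, only reduced variance, mirroring the claim in the abstract.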

Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.

2012-03-01

339

, particle habit, surface albedo, and surface temperature. It is found that inclusion of 3D transport results that the total shortwave and longwave forcings largely cancel during the day means that the relative change of the net forcing of the contrail, in other cases changing its sign. On a more general note, the relatively

Hogan, Robin

340

MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

Marcus, Ryan C. [Los Alamos National Laboratory]

2012-07-25

341

NASA Astrophysics Data System (ADS)

In this paper, we present simulations of some of the most relevant transport properties of the inversion layer of ultra-thin film SOI devices with a self-consistent Monte-Carlo transport code for a confined electron gas. We show that size-induced quantization not only decreases the low-field mobility (as experimentally found in [Uchida K, Koga J, Ohba R, Numata T, Takagi S. Experimental evidences of quantum-mechanical effects on low-field mobility, gate-channel capacitance and threshold voltage of ultrathin body SOI MOSFETs, IEEE IEDM Tech Dig 2001;633-6; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. Low field electron and hole mobility of SOI transistors fabricated on ultra-thin silicon films for deep sub-micron technology application. IEEE Trans Electron Dev 2001;48(12):2842-50; Esseni D, Mastrapasqua M, Celler GK, Fiegna C, Selmi L, Sangiorgi E. An experimental study of mobility enhancement in ultra-thin SOI transistors operated in double-gate mode, IEEE Trans Electron Dev 2003;50(3):802-8] [1-3])

Lucci, Luca; Palestri, Pierpaolo; Esseni, David; Selmi, Luca

2005-09-01

342

NASA Astrophysics Data System (ADS)

A new phenomenological approach is developed to reproduce the stochastic distributions of secondary particle energy and angle, with conservation of momentum and energy, in reactions ejecting more than one ejectile, using inclusive cross-section data. The summation of energy and momentum in each reaction is generally not conserved in Monte-Carlo particle transport simulation based on inclusive cross-sections because the particle correlations are lost in the inclusive cross-section data. However, the energy and angular distributions are successfully reproduced by randomly generating numerous sets of secondary particle configurations which comply with the conservation laws, and sampling one set according to its likelihood. This approach was applied to the simulation of (n,xn) reactions (x ≥ 2) on various targets and to other reactions such as (n,np) and (n,2n?). The calculated secondary particle energy and angular distributions were compared with those of the original inclusive cross-section data to validate the algorithm. The calculated distributions reproduce the trend of the original cross-section data considerably well, especially for heavy targets. The developed algorithm is beneficial for improving the accuracy of event-by-event analysis in particle transport simulation.
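The core of the approach, generating many candidate secondary-particle configurations from inclusive distributions and retaining only those compatible with the conservation laws, can be sketched for a toy two-ejectile reaction (the available energy, tolerance, and flat inclusive distribution are illustrative; momentum conservation is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

E_avail = 10.0          # energy available to two ejectiles (arbitrary units)
tol = 0.05              # tolerance on the energy-conservation check

# Inclusive (single-particle) energy distribution: a flat stand-in here.
def sample_inclusive(n):
    return rng.uniform(0.0, E_avail, size=n)

# Generate many candidate pairs and keep only those compatible with
# energy conservation, mimicking the "generate numerous configurations,
# select a compliant one" strategy described in the abstract.
candidates = np.column_stack([sample_inclusive(200_000),
                              sample_inclusive(200_000)])
ok = np.abs(candidates.sum(axis=1) - E_avail) < tol
accepted = candidates[ok]

# Every accepted configuration conserves energy to within the tolerance.
assert np.all(np.abs(accepted.sum(axis=1) - E_avail) < tol)
assert len(accepted) > 0
```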

Ogawa, T.; Sato, T.; Hashimoto, S.; Niita, K.

2014-11-01

343

Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code

DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.

Rhoades, W.A.; Childs, R.L.

1982-07-01

344

MCNP{trademark} Monte Carlo: A precis of MCNP

MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

Adams, K.J.

1996-06-01

345

Monte Carlo algorithms are developed to calculate the ensemble-average particle leakage through the boundaries of a 2-D binary stochastic material. The mixture is specified within a rectangular area and consists of a fixed number of disks of constant radius randomly embedded in a matrix material. The algorithms are extensions of the proposal of Zimmerman et al., using chord-length sampling to eliminate the need to explicitly model the geometry of the mixture. Two variations are considered. The first algorithm uses Chord-Length Sampling (CLS) for both material regions. The second algorithm employs Limited Chord-Length Sampling (LCLS), using chord-length sampling only in the matrix material. Ensemble-average leakage results are computed for a range of material interaction coefficients and compared against benchmark results for both accuracy and efficiency. Both algorithms are exact for purely absorbing materials and become less accurate as scattering is increased in the matrix material. The LCLS algorithm shows better accuracy than the CLS algorithm in all cases while maintaining equivalent or better efficiency. The accuracy and efficiency problems with the CLS algorithm are due principally to assumptions made in determining the chord-length distribution within the disks.
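Chord-length sampling replaces the explicit disk geometry with on-the-fly sampling of the distance to the next material interface from an exponential chord-length distribution. A 1-D, purely absorbing sketch of the idea (the mean chord lengths and cross sections are arbitrary stand-ins, not the benchmark's values):

```python
import numpy as np

rng = np.random.default_rng(2)

L = 10.0                      # slab thickness
lam = {0: 1.0, 1: 0.5}        # mean chord lengths: matrix (0), disks (1)
sigma = {0: 0.1, 1: 1.0}      # total (purely absorbing) cross sections

def leaks(rng):
    """Track one particle across the slab with chord-length sampling."""
    x, m = 0.0, 0
    chord = rng.exponential(lam[m])            # distance to next interface
    while True:
        d_coll = rng.exponential(1.0 / sigma[m])   # distance to collision
        if d_coll < min(chord, L - x):
            return False                       # collision = absorption here
        if L - x <= chord:
            return True                        # escaped through the far face
        x += chord                             # cross into the other material
        m = 1 - m
        chord = rng.exponential(lam[m])

n = 50_000
leakage = sum(leaks(rng) for _ in range(n)) / n
assert 0.0 < leakage < 1.0
```

Because the media here are purely absorbing, this sketch sits in the regime where the abstract reports both algorithms to be exact.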

T.J. Donovan; Y. Danon

2002-03-15

346

A novel quantum transport simulation method based on the real-time path integral is presented. Applying the WKB approximation, the density matrix of an electron and phonon bath system is reduced to a numerically tractable form. The WKB approximation brings about two non-Markovian Langevin equations for center-of-mass and relative coordinates. Simulation is reduced to solve these classical equations. Another significant assumption

K. Katayama; S. Kamohara; S. Itoh

1990-01-01

347

Photon Maps Photon Tracing Simulating light propagation by shooting photons from the light sources. Photon Tracing Storing the incidences of photon's path. Implementing surface properties statistically. Russian Roulette. Photon Tracing Photon maps keep: Incidence point (in 3D). The normal at that point
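The Russian-roulette step mentioned in the snippet can be sketched in its weighted form: terminate a photon path with some probability and reweight the survivors so the expected transported power is unchanged. (In classic photon mapping the survival probability is typically set to the surface reflectance so survivor power needs no rescaling; the generic variant below makes the unbiasedness explicit.)

```python
import random

def russian_roulette(power, survive_prob, rng=random.random):
    """Terminate a photon with probability 1 - survive_prob; survivors
    carry power / survive_prob so the expected power is unchanged."""
    if rng() < survive_prob:
        return power / survive_prob   # photon continues, reweighted
    return 0.0                        # photon absorbed, path terminated

# Unbiasedness check: average transported power equals the input power.
random.seed(3)
n, p = 200_000, 0.3
avg = sum(russian_roulette(1.0, p) for _ in range(n)) / n
assert abs(avg - 1.0) < 0.02
```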

Lischinski, Dani

348

The accuracy with which Monte Carlo models of photon beams generated by linear accelerators (linacs) can describe small-field dose distributions depends on the modeled width of the electron beam profile incident on the linac target. It is known that the electron focal spot width affects penumbra and cross-field profiles; here, the authors explore the extent to which source occlusion reduces linac output for smaller fields and larger spot sizes. A BEAMnrc Monte Carlo linac model has been used to investigate the variation in penumbra widths and small-field output factors with electron spot size. A formalism is developed separating head scatter factors into source occlusion and flattening filter factors. Differences between head scatter factors defined in terms of in-air energy fluence, collision kerma, and terma are explored using Monte Carlo calculations. Estimates of changes in kerma-based source occlusion and flattening filter factors with field size and focal spot width are obtained by calculating doses deposited in a narrow 2 mm wide virtual "milliphantom" geometry. The impact of focal spot size on phantom scatter is also explored. Modeled electron spot sizes of 0.4-0.7 mm FWHM generate acceptable matches to measured penumbra widths. However, the 0.5 cm field output factor is quite sensitive to electron spot width, the measured output only being matched by calculations for a 0.7 mm spot width. Because the spectra of the unscattered primary (ψπ) and head-scattered (ψσ) photon energy fluences differ, miniphantom-based collision kerma measurements do not scale precisely with the total in-air energy fluence ψ = ψπ + ψσ but with ψπ + 1.2ψσ. For most field sizes, on-axis collision kerma is independent of the focal spot size; but for a 0.5 cm field size and 1.0 mm spot width, it is reduced by around 7%, mostly due to source occlusion.
The phantom scatter factor of the 0.5 cm field also shows some spot size dependence, decreasing by 6% (relative) as spot size is increased from 0.1 to 1.0 mm. The dependence of small-field source occlusion and output factors on the focal spot size makes this a significant factor in Monte Carlo modeling of small (< 1 cm) fields. Changes in penumbra width with spot size are not sufficiently large to accurately pinpoint spot widths. Consequently, while Monte Carlo models based exclusively on large-field data can quite accurately predict small-field profiles and PDDs, in the absence of experimental methods of determining incident electron beam profiles it will remain necessary to measure small-field output factors, fine-tuning modeled spot sizes to ensure good matching between the Monte Carlo and the measured output factors. PMID:19673212

Scott, Alison J D; Nahum, Alan E; Fenwick, John D

2009-07-01

349

Effects of breaking various symmetries on optical properties in ordered materials have been studied. Photonic crystals lacking space-inversion and time-reversal symmetries were shown to display nonreciprocal dispersion ...

Bita, Ion

2006-01-01

350

A Convolution Model for Energy Transport in a Therapeutic Fast Neutron Beam

A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform convolution techniques to calculate the dose distribution from a fast neutron beam. This method transports scattered neutrons and photons in the forward, lateral, and backward directions and protons, electrons, and positrons in the forward and lateral directions by convolving energy spread kernels with initial interaction available

Michael Farley Moyers

1991-01-01

351

find that charge-stabilized colloids form face-centered cubic crystals at all densities up to 60 vol to spontaneously form bulk three-dimensional (3D) crystals with lattice parameters on the order of 1-1000 nm.2 of fcc packing. We find that our samples do form photonic fcc crystals over a wide range of densities

Vos, Willem L.

352

Monte Carlo simulations of plutonium gamma-ray spectra

Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases, and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function for a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and Gaussian distributions for the gamma rays. The MGA code determined the Pu isotopes and specific power of this calculated spectrum and compared it to a similar analysis on a measured spectrum.
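Folding a computed line spectrum with a detector response function is a convolution; for a Gaussian response it can be sketched directly (the peak energies, intensities, and resolution below are illustrative stand-ins, not the MCNP/MGA setup):

```python
import numpy as np

def gaussian_response(energies, lines, fwhm):
    """Fold an ideal line spectrum with a normalized Gaussian response."""
    sigma = fwhm / 2.3548                # FWHM -> standard deviation
    out = np.zeros_like(energies)
    for e0, counts in lines:             # each (peak energy, intensity) line
        out += counts * np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
    return out / (sigma * np.sqrt(2 * np.pi))

E = np.linspace(0.0, 500.0, 5001)        # energy grid (keV), 0.1 keV bins
folded = gaussian_response(E, [(129.3, 1.0), (413.7, 0.5)], fwhm=1.0)

# Total counts are preserved by the normalized Gaussian kernel.
total = folded.sum() * (E[1] - E[0])
assert abs(total - 1.5) < 0.01
```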

Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

1993-07-16

353

NASA Technical Reports Server (NTRS)

An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolutions) to the particular problem than those of a standard analog program as well as demonstrating flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Example of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.

Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

1971-01-01

354

Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a 'D'-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm(2). Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. 
Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator. PMID:23123926

Brualla, L; Mayorga, P A; Flühs, A; Lallena, A M; Sempau, J; Sauerwein, W

2012-11-21

355

NASA Astrophysics Data System (ADS)

Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm2. Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. 
Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.

Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.

2012-11-01

356

NASA Technical Reports Server (NTRS)

In my presentation, I will describe several approximation methods with different level of complexity; they will be gradually applied to simple examples of horizontally inhomogeneous clouds. Understanding of photon horizontal transport and radiative smoothing can help to improve accuracy of the methods The accuracy of the methods will be compared with the full Monte Carlo calculations. The specifics of Monte Carlo in cloudy atmospheres will be also discussed. A special emphasis will be put on the strong forward scattering peak in the phase functions.

Marshak, Alexander

2004-01-01

357

The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
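The MCMC machinery behind such estimations is typically a Metropolis-type sampler: propose a perturbed parameter value and accept it with a probability set by the ratio of target densities. A minimal random-walk Metropolis sketch on a standard normal target (not the paper's geostatistical model; all names are illustrative):

```python
import math
import random

random.seed(6)

def metropolis(log_p, x0, n, step=1.0):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    xs, x = [], x0
    for _ in range(n):
        cand = x + random.gauss(0.0, step)
        # Accept with probability min(1, p(cand)/p(x)).
        if math.log(random.random()) < log_p(cand) - log_p(x):
            x = cand
        xs.append(x)
    return xs

# Target: standard normal, log p(x) = -x^2/2 up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 100_000)
mean = sum(samples) / len(samples)
assert abs(mean) < 0.05        # chain mean is near the target mean of 0
```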

Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.

2004-12-22

358

Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. PMID:24162375
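The frequency- and dose-mean lineal energies compared in this benchmark are moments of the distribution f(y): y_F = ∫y f(y)dy / ∫f(y)dy and y_D = ∫y² f(y)dy / ∫y f(y)dy. A sketch of the computation with a stand-in distribution (not TEPC data):

```python
import numpy as np

y = np.linspace(0.1, 100.0, 1000)     # lineal energy grid (keV/µm)
f = np.exp(-y / 10.0)                 # stand-in frequency distribution f(y)

# Frequency-mean and dose-mean lineal energy; on a uniform grid the
# bin width cancels out of each ratio of sums.
y_F = (y * f).sum() / f.sum()
y_D = (y**2 * f).sum() / (y * f).sum()
assert y_D > y_F > 0                  # y_D >= y_F for any distribution
```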

Ali, F; Waker, A J; Waller, E J

2014-10-01

359

Monte-Carlo estimation of the inflight performance of the GEMS satellite x-ray polarimeter

NASA Astrophysics Data System (ADS)

We report a Monte-Carlo estimation of the in-orbit performance of a cosmic X-ray polarimeter designed to be installed on the focal plane of a small satellite. The simulation uses GEANT for the transport of photons and energetic particles and results from Magboltz for the transport of secondary electrons in the detector gas. We validated the simulation by comparing spectra and modulation curves with actual data taken with radioactive sources and an X-ray generator. We also estimated the in-orbit background induced by cosmic radiation in low Earth orbit.

Kitaguchi, Takao; Tamagawa, Toru; Hayato, Asami; Enoto, Teruaki; Yoshikawa, Akifumi; Kaneko, Kenta; Takeuchi, Yoko; Black, Kevin; Hill, Joanne; Jahoda, Keith; Krizmanic, John; Sturner, Steven; Griffiths, Scott; Kaaret, Philip; Marlowe, Hannah

2014-07-01

360

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
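One key scaling relationship is that free-path lengths sampled at a reference interaction coefficient, multiplied by μ_ref/μ_new, are distributed exactly as if they had been freshly sampled at μ_new, so a single simulation can be rescaled to arbitrary coefficients. A sketch of this relation for a single exponential free path (the per-biography absorption reweighting described in the abstract is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

# Reference simulation: free-path lengths sampled at mu_ref = 1.
mu_ref = 1.0
paths_ref = rng.exponential(1.0 / mu_ref, size=200_000)

# sMC scaling relation: the same biographies, rescaled by mu_ref/mu_new,
# are distributed as a fresh simulation at mu_new would be.
mu_new = 2.5
paths_scaled = paths_ref * (mu_ref / mu_new)

assert abs(paths_scaled.mean() - 1.0 / mu_new) < 0.01
# Survival beyond depth d follows exp(-mu_new * d) after rescaling.
d = 0.5
frac = np.mean(paths_scaled > d)
assert abs(frac - np.exp(-mu_new * d)) < 0.01
```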

Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan

2011-01-01

361

During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of photon energy deposition on depth, and its integral over the whole active volume, were calculated. The results reveal that the designed device geometry is amenable to optimization.

Leyva, A.; Pinera, I.; Abreu, Y.; Cruz, C. M. [Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear, calle 30, esq. 5ta, No. 502, Miramar, Playa, C. Habana, Cuba, P.O. Box. 6122 (Cuba); Montano, L. M. [Centro de Investigaciones y Estudios Avanzados, D.F., Mexico, P.O. Box. 14-740 (Mexico)

2008-08-11

362

McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields

J. K. Shultis; R. E. Faw; M. H. Stedry; W. Hall

1994-01-01

363

We present a combined experimental and theoretical investigation into the charge transport and recombination in dye-sensitized mesoporous TiO2. We electronically probe the photoinduced change in conductivity through in-plane devices while simultaneously optically probing signatures of the charge species. Our quasi-continuous wave technique allows us to build data sets of electron mobility and recombination versus charge density over a wide temperature range. We observe that the charge density dependence of mobility in TiO2 is strong at high temperatures and gradually reduces with reducing temperature, to an extent where at temperatures below 260 K the mobility is almost independent of charge density. The mobility first increases and then decreases with reducing temperature at any given charge density. These observed trends are surprising and consistent with the multiple-trapping model for charge transport only if the trap density-of-states (DoS) is allowed to become less deep and narrower as the temperature reduces. Our recombination measurements and simulations over a broad range of charge density and temperature are also consistent with the above-mentioned varying DoS function when the recombination rate constant is allowed to increase with temperature, itself consistent with a thermally activated charge-transfer process. Further to using the Monte Carlo simulations to model the experimental data, we use the simulations to aid our understanding of the limiting factors to charge transport and recombination. According to our model, we find that the charge recombination is mainly governed by the recombination reaction rate constant and the charge density dependence is mainly a result of the bimolecular nature of the recombination process. 
The implication to future material design is that if the mobility can be enhanced without increasing the charge density in the film, for instance by reducing the average trap depth, then this will not be at the sacrifice of comparably enhanced recombination and it will greatly increase the charge carrier diffusion lengths in dye-sensitized or mesoscopic solar cells. PMID:18767840

Petrozza, Annamaria; Groves, Chris; Snaith, Henry J

2008-10-01

364

Chapter 2 Monte Carlo Integration This chapter gives an introduction to Monte Carlo integration useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986] for Monte Carlo applications to neutron transport problems; Lewis & Miller [1984] is a good source
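The basic estimator such a chapter introduces: approximate ∫₀¹ f(x)dx by the sample mean of f at uniformly random points. A minimal sketch:

```python
import random

random.seed(5)

def mc_integrate(f, n):
    """Monte Carlo estimate of the integral of f over [0, 1]:
    I ≈ (1/N) Σ f(x_i) with x_i drawn uniformly from [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

# Example: the integral of 3x^2 over [0, 1] is exactly 1.
est = mc_integrate(lambda x: 3 * x * x, 200_000)
assert abs(est - 1.0) < 0.01
```

The error of this estimator shrinks as 1/√N regardless of dimension, which is why the same idea scales to neutron and photon transport problems.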

Stanford University

365

The manufacture of miniaturized high-activity (192)Ir sources has become a market preference in modern brachytherapy. The smaller source dimensions allow applicators of smaller diameter and are also suitable for interstitial implants. Miniaturized (60)Co HDR sources are now available with dimensions identical to those of (192)Ir sources. (60)Co sources have the advantage of a longer half-life compared with (192)Ir, and high dose rate brachytherapy sources with longer half-lives are a pragmatic, economical solution for developing countries. This study compares the TG-43U1 dosimetric parameters of the new BEBIG (60)Co HDR and new microSelectron (192)Ir HDR sources. Dosimetric parameters are calculated with an EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR (192)Ir v2 and new BEBIG (60)Co HDR sources. The air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10(-8) ± 0.55% U Bq(-1) and 3.039×10(-7) ± 0.41% U Bq(-1) for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water are 1.116±0.12% cGy h(-1)U(-1) and 1.097±0.12% cGy h(-1)U(-1), respectively. The values of the radial dose function at distances up to 1 cm and beyond 22 cm are higher for the BEBIG (60)Co HDR source than for the other source. The anisotropy function values rise sharply toward the longitudinal axis of the BEBIG (60)Co source, more sharply than for the other source. Tissue dependence of the absorbed dose was investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle; no significant variation between the two sources is noted at a radial distance of 5 cm except for lung tissue.
The true dose rates are calculated by considering photon as well as electron transport with appropriate cut-off energies. No significant dosimetric advantages or disadvantages are found for either source. PMID:23293454
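The TG-43 quantities reported above combine multiplicatively in the 1D formalism, D(r) = S_K · Λ · (r0/r)² · g(r) · φ_an(r). A minimal sketch with illustrative numbers (the function name is ours; real g(r) and φ_an(r) values come from tabulated consensus data such as those computed in this study):

```python
def dose_rate_1d(s_k, lam, r_cm, g, phi_an, r0_cm=1.0):
    """TG-43U1 1D formalism with the point-source geometry function.

    s_k     -- air-kerma strength in U
    lam     -- dose rate constant in cGy h^-1 U^-1
    g       -- radial dose function, normalized so g(r0_cm) = 1
    phi_an  -- 1D anisotropy function
    Returns the dose rate in cGy/h at radial distance r_cm.
    """
    return s_k * lam * (r0_cm / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

# Illustrative check: at the reference distance with g = phi_an = 1,
# the dose rate reduces to S_K * Lambda.
rate = dose_rate_1d(10.0, 1.1, 1.0, lambda r: 1.0, lambda r: 1.0)
```

With trivial g and φ_an the expression is pure inverse square, which is the baseline the tabulated functions correct for.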

Islam, M Anwarul; Akramuzzaman, M M; Zakaria, G A

2012-10-01

366

The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed…

Burns, Kimberly Ann

2009-01-01

367

Dysregulation of noradrenergic function has been implicated in a variety of psychiatric and neurodegenerative disorders, including depression and Alzheimer's disease. The noradrenaline transporter (NAT) is a major target for antidepressant drugs, including reboxetine, a selective noradrenaline reuptake inhibitor. Therefore, the development of a radiotracer for imaging of the NAT is desirable. In this study, NKJ64, a novel iodinated analog of reboxetine, was radiolabeled and evaluated as a potential single photon emission computerized tomography (SPECT) radiotracer for imaging the NAT in brain. Biological evaluation of the novel radiotracer, ¹²³/¹²⁵I-NKJ64, was carried out in rats using: in vitro ligand binding assays; in vitro and ex vivo autoradiography; in vivo biodistribution studies and ex vivo pharmacological blocking studies. ¹²⁵I-NKJ64 displayed saturable binding with high affinity for NAT in cortical homogenates (K(D) = 4.82 ± 0.87 nM, mean ± SEM, n = 3). In vitro and ex vivo autoradiography showed the regional distribution of ¹²³I-NKJ64 binding to be consistent with the known density of NAT in brain. Following i.v. injection there was rapid uptake of ¹²³I-NKJ64 in brain, with maximum uptake of 2.93% ± 0.14% (mean ± SEM, n = 3) of the injected dose. The specific to nonspecific ratio (locus coeruleus:caudate putamen) of ¹²³I-NKJ64 uptake measured by ex vivo autoradiography was 2.8 at 30 min post i.v. injection. The prior administration of reboxetine significantly reduced the accumulation of ¹²³I-NKJ64 in the locus coeruleus (>50% blocking). The data indicate that further evaluation of ¹²³I-NKJ64 in nonhuman primates is warranted in order to determine its utility as a SPECT radiotracer for imaging of NAT in brain. PMID:21157929

Tavares, Adriana Alexandre S; Jobson, Nicola K; Dewar, Deborah; Sutherland, Andrew; Pimlott, Sally L

2011-07-01

368

Parallelizing Monte Carlo with PMC

PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
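PMC's requirement of independent, reproducible random-number sequences per worker can be illustrated with a toy sketch (the names and the survival game below are ours, not PMC's API): each batch owns a private seeded generator, so the tally is identical however the batches are scheduled.

```python
import random

def simulate_batch(seed, n_histories):
    """One worker's share of histories; a private seeded RNG gives every
    worker an independent, reproducible random-number sequence."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_histories):
        # Toy "transport": a photon survives 10 interaction steps,
        # each with 80% survival probability.
        if all(rng.random() < 0.8 for _ in range(10)):
            survived += 1
    return survived

def run(total_histories, n_workers=4, base_seed=1234):
    """Split the histories into per-worker batches. Here the batches run
    in a serial loop; handing the same batches to multiprocessing.Pool
    would parallelize the run without changing the tallied answer."""
    batches = [(base_seed + i, total_histories // n_workers)
               for i in range(n_workers)]
    return sum(simulate_batch(s, n) for s, n in batches)
```

This mirrors PMC's multi-mode idea: the same source runs serially or in parallel, and the per-batch seeding keeps results reproducible in either mode.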

Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

1994-11-01

369

Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.

Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P. [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 115 27 Athens (Greece); Department of Physics, Nuclear and Particle Physics Section, University of Athens, Ilisia, 157 71 Athens (Greece)

2013-01-15

370

NASA Astrophysics Data System (ADS)

The application of topology, the mathematics of conserved properties under continuous deformations, is creating a range of new opportunities throughout photonics. This field was inspired by the discovery of topological insulators, in which interfacial electrons transport without dissipation, even in the presence of impurities. Similarly, the use of carefully designed wavevector-space topologies allows the creation of interfaces that support new states of light with useful and interesting properties. In particular, this suggests unidirectional waveguides that allow light to flow around large imperfections without back-reflection. This Review explains the underlying principles and highlights how topological effects can be realized in photonic crystals, coupled resonators, metamaterials and quasicrystals.

Lu, Ling; Joannopoulos, John D.; Soljačić, Marin

2014-11-01

371

Topology is revolutionizing photonics, bringing with it new theoretical discoveries and a wealth of potential applications. This field was inspired by the discovery of topological insulators, in which interfacial electrons transport without dissipation even in the presence of impurities. Similarly, new optical mirrors of different wave-vector space topologies have been constructed to support new states of light propagating at their interfaces. These novel waveguides allow light to flow around large imperfections without back-reflection. The present review explains the underlying principles and highlights the major findings in photonic crystals, coupled resonators, metamaterials and quasicrystals.

Lu, Ling; Soljačić, Marin

2014-01-01

372

NASA Astrophysics Data System (ADS)

The goal of this work was to examine the use of simplified diode detector models within a recently proposed Monte Carlo (MC) based small field dosimetry formalism and to investigate the influence electron source parameterization has on MC calculated correction factors. BEAMnrc was used to model Varian 6 MV jaw-collimated square field sizes down to 0.5 cm. The IBA stereotactic field diode (SFD), PTW T60016 (shielded) and PTW T60017 (un-shielded) diodes were modelled in DOSRZnrc and isocentric output ratios (OR(fclin)(detMC)) calculated at depths of d = 1.5, 5.0 and 10.0 cm. Simplified detector models were then tested by evaluating the percent difference in OR(fclin)(detMC) between the simplified and complete detector models. The influence of active volume dimension on simulated output ratio and response factor was also investigated. The sensitivity of each MC calculated replacement correction factor (k(fclin,fmsr)(Qclin,Qmsr)), as a function of electron FWHM between 0.100 and 0.150 cm and energy between 5.5 and 6.5 MeV, was investigated for the same set of small field sizes using the simplified detector models.

Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.

2012-08-01

373

Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

NASA Astrophysics Data System (ADS)

In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
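The second addition described above, recalculating the remaining path length when a photon crosses into a voxel with new optical properties, can be sketched in one dimension. This is an illustrative toy (names and geometry are ours, not the authors' code): a single optical depth is sampled, and the residual physical path is rescaled by the local attenuation coefficient at each crossing.

```python
import math
import random

def distance_to_interaction(mu_t, boundaries, x0, rng):
    """1-D sketch: a photon moves in +x through voxels whose attenuation
    coefficients are mu_t[i] between boundaries[i] and boundaries[i+1].
    One optical depth is sampled for the whole flight; crossing into a
    new voxel simply rescales the remaining path by the local mu_t."""
    tau = -math.log(1.0 - rng.random())  # sampled optical depth to traverse
    x = x0
    for i, mu in enumerate(mu_t):
        hi = boundaries[i + 1]
        if x >= hi:
            continue                # already past this voxel
        step = tau / mu             # physical path left at this mu_t
        if x + step <= hi:
            return x + step         # interaction point inside this voxel
        tau -= mu * (hi - x)        # optical depth spent crossing the voxel
        x = hi
    return x                        # escaped the grid without interacting
```

A useful invariant is that splitting a uniform voxel in two changes nothing: the rescaling spends exactly the optical depth the photon would have accumulated anyway.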

Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

1997-06-01

374

Deterministic theory of Monte Carlo variance

The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
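The exponential transform that Dwivedi's method builds on can be illustrated on the simplest possible problem: estimating a slab transmission probability exp(-mu*t) by stretching the sampled free paths and compensating with a weight. This is a 1-D distance-biasing sketch only (the angular-biasing half of the scheme, and all names here, are not from the paper):

```python
import math
import random

def transmission_biased(mu, thickness, n, stretch, seed=1):
    """Estimate the slab transmission probability exp(-mu*thickness).

    Free paths are drawn from a stretched density with mu_b = mu/stretch,
    so many more sampled photons cross the slab; each crossing is scored
    with the weight (true survival prob)/(biased survival prob), which
    keeps the estimator unbiased while shrinking its variance.
    """
    rng = random.Random(seed)
    mu_b = mu / stretch
    w_escape = math.exp(-(mu - mu_b) * thickness)
    crossings = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / mu_b > thickness
    )
    return w_escape * crossings / n
```

With stretch = 1 this reduces to the analog game; larger stretch factors trade a smaller per-event weight for far more scoring events, which is exactly the weight behaviour the paper's modified transport problem tracks.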

Ueki, T.; Larsen, E.W. [Univ. of Michigan, Ann Arbor, MI (United States)

1996-12-31

375

The goal of this work was to examine the use of simplified diode detector models within a recently proposed Monte Carlo (MC) based small field dosimetry formalism and to investigate the influence electron source parameterization has on MC calculated correction factors. BEAMnrc was used to model Varian 6 MV jaw-collimated square field sizes down to 0.5 cm. The IBA stereotactic field diode (SFD), PTW T60016 (shielded) and PTW T60017 (un-shielded) diodes were modelled in DOSRZnrc and isocentric output ratios (OR(fclin)(detMC)) calculated at depths of d = 1.5, 5.0 and 10.0 cm. Simplified detector models were then tested by evaluating the percent difference in OR(fclin)(detMC) between the simplified and complete detector models. The influence of active volume dimension on simulated output ratio and response factor was also investigated. The sensitivity of each MC calculated replacement correction factor (k(fclin,fmsr)(Qclin,Qmsr)), as a function of electron FWHM between 0.100 and 0.150 cm and energy between 5.5 and 6.5 MeV, was investigated for the same set of small field sizes using the simplified detector models. The SFD diode can be approximated simply as a silicon chip in water, the T60016 shielded diode can be modelled as a chip in water plus the entire shielding geometry, and the T60017 unshielded diode as a chip in water plus the filter plate located upstream. The detector-specific k(fclin,fmsr)(Qclin,Qmsr) values required to correct measured output ratios using the SFD, T60016 and T60017 diode detectors are insensitive to incident electron energy between 5.5 and 6.5 MeV and spot size variation between FWHM = 0.100 and 0.150 cm.
Three general conclusions come out of this work: (1) detector models can be simplified to produce OR(fclin)(detMC) to within 1.0% of those calculated using the complete geometry, though typically not only the silicon chip but also any high-density components close to the chip, such as scattering plates or shielding material, must be included in the model; (2) diode detectors of smaller active radius require less of a correction; and (3) k(fclin,fmsr)(Qclin,Qmsr) is insensitive to the incident electron energy and spot size variations investigated. Therefore, simplified detector models can be used with acceptable accuracy within the recently proposed small field dosimetry formalism. PMID:22842678

Cranmer-Sargison, G; Weston, S; Evans, J A; Sidhu, N P; Thwaites, D I

2012-08-21

376

NASA Astrophysics Data System (ADS)

Amphiphiles, under appropriate conditions, can self-assemble into nanoscale thin membrane vessels (vesicles) that encapsulate and hence protect and transport molecular payloads. Vesicles assemble naturally within cells but can also be artificially synthesized. In this article, we review the mechanisms and applications of light-field interactions with vesicles. By being associated with light-emitting entities (e.g., dyes, fluorescent proteins, or quantum dots), vesicles can act as imaging agents in addition to cargo carriers. Vesicles can also be optically probed on the basis of their nonlinear response, typically from the vesicle membrane. Light fields can be employed to transport vesicles by using optical tweezers (photon momentum) or can directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy). We conclude with emerging vesicle applications in biology and photochemical microreactors.

Vasdekis, A. E.; Scott, E. A.; Roke, S.; Hubbell, J. A.; Psaltis, D.

2013-07-01

377

In photon-photon colliders, pairs of particles are produced. Particle Physics and Experimental Detectors: the structure functions of the photon. Presented at the Particle Accelerator Conference 1995, Dallas, TX, May 1-5, 1995; to be published in the Proceedings.

Sessler, Andrew M.

2008-01-01

378

Multivariate Monte Carlo Model Fitting

NASA Astrophysics Data System (ADS)

We present a new method for analyzing multi-dimensional data. The method uses an astrophysical and instrument response Monte Carlo to simulate photons and then iteratively analyze the data. The simulated photons are then compared directly with the measured values for the data with a new multivariate generalization of the Cramér-von Mises and Kolmogorov-Smirnov statistic. Techniques for model fitting, error estimation, and deconvolution using this method are discussed. Examples of this approach using Chandra observations of X-ray clusters of galaxies and XMM-Newton Reflection Grating Spectrometer data are presented.
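The classic one-dimensional two-sample statistic that the paper generalizes is the maximum gap between two empirical CDFs. A plain-Python sketch of that 1-D building block (the multivariate generalization itself is not reproduced here):

```python
def ks_two_sample(xs, ys):
    """Two-sample Kolmogorov-Smirnov distance in one dimension: the
    maximum absolute gap between the two empirical CDFs. Equal values
    in both samples are consumed together so ties are handled."""
    xs, ys = sorted(xs), sorted(ys)
    nx, ny = len(xs), len(ys)
    i = j = 0
    d = 0.0
    while i < nx and j < ny:
        v = min(xs[i], ys[j])
        while i < nx and xs[i] == v:
            i += 1
        while j < ny and ys[j] == v:
            j += 1
        d = max(d, abs(i / nx - j / ny))
    return d
```

Comparing simulated photons against measured events with a statistic like this avoids binning, which is the property the multivariate version exploits.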

Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.

2000-05-01

379

Monte Carlo techniques in medical radiation physics

The author's main purpose is to review the techniques and applications of the Monte Carlo method in medical radiation physics since Raeside's review article in 1976. Emphasis is given to applications where photon and/or electron transport in matter is simulated. Some practical aspects of Monte Carlo practice, mainly related to random numbers and other computational details, are discussed in connection

P. Andreo

1991-01-01

380

Monte Carlo Application ToolKit (MCATK)

NASA Astrophysics Data System (ADS)

The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes. We will describe MCATK and its capabilities along with presenting some verification and validations results.

Adams, Terry; Nolen, Steve; Sweezy, Jeremy; Zukaitis, Anthony; Campbell, Joann; Goorley, Tim; Greene, Simon; Aulwes, Rob

2014-06-01

381

Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays

NASA Technical Reports Server (NTRS)

Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.

Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.

2009-01-01

382

The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

Burns, Kimberly A.

2009-08-01

383

Monte Carlo methods: Sequential Monte Carlo

Monte Carlo methods: Sequential Monte Carlo. A. Doucet, MLSS, Carcans, Sept. 2011. Slide deck of 85 slides; the generic problem considered is approximating a sequence of probability distributions.

Doucet, Arnaud

384

A theoretical and experimental study (with Nelson Tansu) demonstrates the impact of carrier transport on current injection in InGaAsN QW lasers, including the effect of hole transport on the temperature dependence of the external differential quantum efficiency.

Gilchrist, James F.

385

Monte Carlo Simulator to Study High Mass X-Ray Binary System

We have developed a Monte Carlo simulator for astrophysical objects, which incorporates the transport of X-ray photons in photoionized plasma. We applied the code to X-ray spectra of the high mass X-ray binaries Vela X-1 and GX 301-2, obtained with Chandra HETGS. By utilizing the simulator, we have successfully reproduced many emission lines observed from Vela X-1, and the ionization structure and the matter distribution in the Vela X-1 system are deduced. For GX 301-2, we have derived the physical parameters of the material surrounding the neutron star from the fully resolved shape of the Compton shoulder in the iron K{alpha} line.

Watanabe, Shin; Nagase, Fumiaki; Takahashi, Tadayuki; /Sagamihara, Inst. Space Astron. Sci.; Sako, Masao; Kahn, Steve M.; /KIPAC, Menlo Park; Ishida, Manabu; Ishisaki; /Tokyo Metropolitan U.; Paerels, Frederik; /Columbia U.

2005-07-08

386

Nuclear spectroscopy for in situ soil elemental analysis: Monte Carlo simulations

We developed a model to simulate a novel inelastic neutron scattering (INS) system for in situ non-destructive analysis of soil using standard Monte Carlo Neutron Photon (MCNP5a) transport code. The volumes from which 90%, 95%, and 99% of the total signal are detected were estimated to be 0.23 m{sup 3}, 0.37 m{sup 3}, and 0.79 m{sup 3}, respectively. Similarly, we assessed the instrument's sampling footprint and depths. In addition we discuss the impact of the carbon's depth distribution on sampled depth.

Wielopolski, L.; Doron, O.

2012-07-01

387

NASA Astrophysics Data System (ADS)

The triple- and quadruple-escape peaks of 6.128 MeV photons from the ¹⁹F(p,αγ)¹⁶O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code PENELOPE. The behaviour of the escape intensities was simulated for several gamma-ray energies and detector dimensions; the results obtained can be extended to other energies using an empirical function and statistical properties related to the phenomenon.

Maidana, N. L.; Brualla, L.; Vanin, V. R.; Oliveira, J. R. B.; Rizzutto, M. A.; do Nascimento, E.; Fernández-Varea, J. M.

2010-04-01

388

1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

T. EVANS; ET AL

2000-08-01

389

India, a country of over one billion people, has faced serious urban congestion and traffic jams in her major cities since the 1970s. The public transport system in Mumbai has been operating at three times its capacity, and public transport in Delhi, Kolkata and Chennai is also under strain. Elevated and underground railways could be options to support the overworked surface

Makarand Gulawani

390

NASA Technical Reports Server (NTRS)

The report describes the CTRANS program, which was designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds). Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed: results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are built up from their impact on the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the receiver's finite field of view and of variations in its response over the field of view can be directly simulated.

1976-01-01

391

Validation of the photon dose calculation model in the VARSKIN 4 skin dose computer code.

An updated version of the skin dose computer code VARSKIN, namely VARSKIN 4, was examined to determine the accuracy of the photon model in calculating dose rates with different combinations of source geometry and radionuclides. The reference data for this validation were obtained by means of Monte Carlo transport calculations using MCNP5. The geometries tested included the zero-volume point and disc sources, as well as the volume sphere and cylinder sources. Three configurations were tested: the source directly on the skin; the source off the skin with an absorber material between source and skin; and the source off the skin with only an air gap between source and skin. The results of these calculations showed that the non-volume sources produced dose rates that were in very good agreement with the Monte Carlo calculations, but the volume sources resulted in overestimates of the dose rates compared with the Monte Carlo results by factors that ranged up to about 2.5. The results for the air gap showed poor agreement with Monte Carlo for all source geometries, with the dose rates overestimated in all cases. The conclusion was that, for situations where the beta dose is dominant, these results are of little significance because the photon dose in such cases is generally a very small fraction of the total dose. For situations in which the photon dose is dominant, use of the point or disc geometries should be adequate in most cases except those in which the dose approaches or exceeds an applicable limit. Such situations will often require a more accurate dose assessment and may require the use of methods such as Monte Carlo transport calculations. PMID:23111523

Sherbini, Sami; Decicco, Joseph; Struckmeyer, Richard; Saba, Mohammad; Bush-Goddard, Stephanie

2012-12-01

392

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography.

We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

393

Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

NASA Astrophysics Data System (ADS)

We evaluated the potential of mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at

Chen, Jin; Fang, Qianqian; Intes, Xavier

2012-10-01

394

Progressive Photon Mapping

Progressive Photon Mapping. Toshiya Hachisuka (UC San Diego), Shinji Ogaki (The University of Nottingham), Henrik Wann Jensen (UC San Diego). Figure 1 compares path tracing, bidirectional path tracing, Metropolis light transport, photon mapping, and progressive photon mapping on a scene in which a glass lamp illuminates a wall and generates a complex

Kazhdan, Michael

395

Dirac tensor with heavy photon

NASA Astrophysics Data System (ADS)

For large-angle hard-photon emission by the initial leptons in the high-energy annihilation of e+e- to hadrons, the Dirac tensor is obtained by taking the lowest-order radiative corrections into account. The case of large-angle emission of two hard photons by the initial leptons is considered. The final result includes the kinematic cases of collinear emission of hard photons and of soft virtual and real photons; it can be used for the construction of Monte Carlo generators.

Bytev, V. V.; Kuraev, E. A.; Scherbakova, E. S.

2013-03-01

396

Effect of irradiation with {gamma}-ray photons on the mechanism of charge transport in an n-CdS/p-CdTe heterostructure is considered. It is shown that the forward current-voltage characteristic of an n-CdS/p-CdTe heterostructure before and after irradiation is described by two exponential dependences: I = I{sub 01}exp(qV/C{sub 01}kT) and I = I{sub 02}exp(qV/C{sub 02}kT). It is found that, in the first portion of the current-voltage characteristic, the current is limited by thermoelectronic emission while, in the second portion, the current is limited by recombination of nonequilibrium charge carriers in the electrically neutral portion of a CdTe{sub 1-x}S{sub x} alloy at the n-CdS/p-CdTe heteroboundary. Anomalous dose dependences of parameters of the n-CdS/p-CdTe heterosystem are attributed to a variation in the degree of compensation of local centers at the CdS-CdTe{sub 1-x}S{sub x} interface and in the CdTe{sub 1-x}S{sub x} layers in relation to the dose of irradiation with {gamma}-ray photons.

Muzafarova, S. A., E-mail: samusu@rambler.ru; Mirsagatov, S. A., E-mail: mirsagatov@rambler.ru; Dzhamalov, F. N. [Academy of Sciences of the Republic of Uzbekistan, Physicotechnical Institute, Researh-and-Production Association Sun Physics (Uzbekistan)

2009-02-15

397

McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through the source shields and an integral line-beam source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

Shultis, J.K.; Faw, R.E.; Stedry, M.H. [Kansas State Univ., Manhattan, KS (United States). Dept. of Nuclear Engineering; Hall, W. [Kansas State Univ., Manhattan, KS (United States)

1994-07-01

398

Monte Carlo analysis of CLAS data

We present a fit of the virtual-photon scattering asymmetry of polarized Deep Inelastic Scattering which combines a Monte Carlo technique with the use of a redundant parametrization based on Neural Networks. We apply the result to the analysis of CLAS data on a polarized proton target.

L. Del Debbio; A. Guffanti; A. Piccione

2008-06-30

399

Monte Carlo simulation for radiative kaon decays

For high precision measurements of K decays, the presence of radiated photons cannot be neglected. The Monte Carlo simulations must include the radiative corrections in order to compute the correct event counting and efficiency calculations. In this paper we briefly describe a method for simulating such decays.

C. Gatti

2005-07-25

400

Exact and efficient solution of the radiative transport equation for the semi-infinite medium

An accurate and efficient solution of the radiative transport equation is proposed for modeling the propagation of photons in the three-dimensional anisotropically scattering half-space medium. The exact refractive index mismatched boundary condition is considered and arbitrary rotationally invariant scattering functions can be applied. The obtained equations are verified with Monte Carlo simulations in the steady-state, temporal frequency, and time domains resulting in an excellent agreement. PMID:23774820
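
Monte Carlo verification of an RTE solver for a half-space typically rests on a random-walk reference estimate. The toy version below assumes isotropic scattering and an index-matched boundary, unlike the anisotropic phase functions and mismatched boundary handled by the paper's solution; it estimates only the total diffuse reflectance.

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, n_photons=20000, seed=1):
    """Toy Monte Carlo estimate of diffuse reflectance from a
    semi-infinite medium with isotropic scattering and a matched
    boundary. mu_a, mu_s are absorption/scattering coefficients."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0   # start at the surface, heading inward
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)  # exponential free path
            if z < 0.0:                # escaped back through the surface
                reflected += w
                break
            w *= albedo                # implicit absorption at each collision
            if w < 1e-4:
                break                  # (Russian roulette omitted for brevity)
            uz = 2.0 * rng.random() - 1.0   # isotropic scattering
    return reflected / n_photons
```

A high-albedo medium reflects far more than a strongly absorbing one, which is the kind of qualitative check a solver comparison starts from.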

Liemert, Andre; Kienle, Alwin

2013-01-01

401

Visualization of transdermal permeant pathways is necessary to substantiate model-based conclusions drawn using permeability data. The aim of this investigation was to visualize the transdermal delivery of sulforhodamine B (SRB), a fluorescent hydrophilic permeant, and of rhodamine B hexyl ester (RBHE), a fluorescent hydrophobic permeant, using dual-channel two-photon microscopy (TPM) to better understand the transport pathways and the mechanisms of enhancement in skin treated with low-frequency ultrasound (US) and/or a chemical enhancer (sodium lauryl sulfate, SLS) relative to untreated skin (the control). The results demonstrate that (1) both SRB and RBHE penetrate beyond the stratum corneum and into the viable epidermis only in discrete regions (localized transport regions, LTRs) of US-treated and of US/SLS-treated skin, (2) a chemical enhancer is required in the coupling medium during US treatment to obtain significant levels of increased penetration of SRB and RBHE in US-treated skin relative to untreated skin, and (3) transcellular pathways are present in the LTRs of US-treated and of US/SLS-treated skin for SRB and RBHE, and in SLS-treated skin for SRB. In summary, the skin is greatly perturbed in the LTRs of US-treated and US/SLS-treated skin, with chemical enhancers playing a significant role in US-mediated transdermal drug delivery. PMID:17554365

Kushner, Joseph; Kim, Daekeun; So, Peter T C; Blankschtein, Daniel; Langer, Robert S

2007-12-01

402

Realistic Monte Carlo simulation of Ga67 SPECT imaging

Describes a comprehensive Monte Carlo program tailored for efficient simulation of realistic Ga-67 SPECT imaging through the entire range of photon emission energies. The authors' approach incorporates several new features developed by them and by others. It is now being used to optimize and evaluate the performance of various methods of compensating for photon scatter, attenuation, and nonstationary distance- and

Stephen C. Moore; G. El Fakhri

2001-01-01

403

Highlights of the VIIIth International Workshop on Photon-Photon Collisions are reviewed. New experimental and theoretical results were reported in virtually every area of γγ physics, particularly in exotic resonance production and tests of quantum chromodynamics, where asymptotic freedom and factorization theorems provide predictions for both inclusive and exclusive γγ reactions at high momentum transfer. 73 refs., 12 figs.

Brodsky, S.J.

1988-07-01

404

A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
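
The "four-tap, shift-register-sequence random number generator" mentioned above is a generalized feedback shift register (GFSR): each new word is the XOR of four earlier words in the sequence. The sketch below uses Ziff's well-known tap set (471, 1586, 6988, 9689) as an assumption; the actual taps in the Java code are not stated in the abstract.

```python
import random

class FourTapGFSR:
    """Toy four-tap shift-register-sequence (GFSR) generator:
    x[n] = x[n-471] ^ x[n-1586] ^ x[n-6988] ^ x[n-9689].
    Tap set is illustrative, not necessarily the code's own."""
    TAPS = (471, 1586, 6988, 9689)

    def __init__(self, seed=0):
        r = random.Random(seed)           # bootstrap the lag buffer
        self.lag = self.TAPS[-1]
        self.buf = [r.getrandbits(32) for _ in range(self.lag)]
        self.i = 0

    def next_word(self):
        a, b, c, d = self.TAPS
        i, n = self.i, self.lag
        w = (self.buf[(i - a) % n] ^ self.buf[(i - b) % n]
             ^ self.buf[(i - c) % n] ^ self.buf[(i - d) % n])
        self.buf[i] = w                   # overwrite the oldest word
        self.i = (i + 1) % n
        return w

    def random(self):
        """Uniform float in [0, 1)."""
        return self.next_word() / 2**32
```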

Charles A. Wemple; Joshua J. Cogliati

2005-04-01

405

The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various

T. M. Sutton; F. B. Brown; F. G. Bischoff; D. B. MacMillan; C. L. Ellis; J. T. Ward; C. T. Ballinger; D. J. Kelly; L. Schindler

1999-01-01

406

Coupled Monte Carlo neutral-fluid plasma simulation of Alcator C-Mod divertor plasma near detachment

Using the coupled fluid-plasma and Monte Carlo neutral transport code B2-EIRENE, we simulate the Alcator C-Mod divertor plasma near detachment … they are ionized. Coupled Monte Carlo neutral and fluid-plasma transport codes similar to those used successfully …

Karney, Charles

407

J. Seguinot and T. Ypsilantis have recently described the theory and history of Ring Imaging Cherenkov (RICH) detectors. In this paper, I will expand on these excellent review papers, by covering the various photon detector designs in greater detail, and by including discussion of mistakes made, and detector problems encountered, along the way. Photon detectors are among the most difficult devices used in physics experiments, because they must achieve high efficiency for photon transport and for the detection of single photo-electrons. For gaseous devices, this requires the correct choice of gas gain in order to prevent breakdown and wire aging, together with the use of low noise electronics having the maximum possible amplification. In addition, the detector must be constructed of materials which resist corrosion by photosensitive materials; the detector enclosure must be tightly sealed in order to prevent oxygen leaks; etc. The most critical step is the selection of the photocathode material. Typically, a choice must be made between a solid (CsI) or gaseous photocathode (TMAE, TEA). A conservative approach favors a gaseous photocathode, since it is continuously being replaced by flushing, and permits the photon detectors to be easily serviced (the air-sensitive photocathode can be removed at any time). In addition, it can be argued that we now know how to handle TMAE, which, as is generally accepted, is the best photocathode material available as far as quantum efficiency is concerned. However, it is a very fragile molecule, and therefore its use may result in relatively fast wire aging. A possible alternative is TEA, which, in the early days, was rejected because it requires expensive CaF2 windows, which could be contaminated easily in the region of 8.3 eV and thus lose their UV transmission.

Va`vra, J.

1995-10-01

408

State-of-the-art Monte Carlo 1988

Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
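
One of the classic variance-reduction techniques alluded to above is Russian roulette, which terminates low-weight histories without biasing the mean. A minimal sketch; the threshold and survival probability below are illustrative choices, not values from any particular code:

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Russian roulette: unbiased termination of low-weight histories.
    Below the threshold the particle survives with probability
    `survival`; a survivor's weight is rescaled so that the expected
    weight is preserved."""
    if weight >= threshold:
        return weight              # no game played
    if rng.random() < survival:
        return weight / survival   # survivor carries the extra weight
    return 0.0                     # history terminated
```

Averaged over many histories, the returned weight equals the input weight, which is exactly the unbiasedness property roulette must satisfy.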

Soran, P.D.

1988-06-28

409

Graduiertenschule Hybrid Monte Carlo

Graduiertenschule Hybrid Monte Carlo, SS 2005, Heermann, Universität Heidelberg. In conventional Monte-Carlo (MC) calculations of condensed matter systems, such as an N-particle system … probability distribution, unlike Monte-Carlo calculations. The Hybrid Monte-Carlo (HMC) method combines …

Heermann, Dieter W.

410

Treating electron transport in MCNP™

The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum, slowing down from 0.5 MeV to 0.0625 MeV, will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo about 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
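
The condensed-history step loop described above can be caricatured as follows. The Gaussian energy-loss straggling and deflection sampling here are crude stand-ins for the Goudsmit-Saunderson and Landau/Blunck-Leisegang distributions actually used, and the 5% step fraction is an illustrative choice:

```python
import random

def condensed_history_track(E0, step_frac=0.05, E_cut=0.01, seed=2):
    """Skeleton of a condensed-history electron loop: the path is
    broken into steps, each covering many collisions, and the
    aggregate energy loss and angular deflection per step are
    sampled from multiple-scattering-style distributions. The
    Gaussian sampling below is a stand-in for the real theories.
    Energies in MeV; returns (number of steps, net deflection)."""
    rng = random.Random(seed)
    E, theta, n_steps = E0, 0.0, 0
    while E > E_cut:
        mean_loss = step_frac * E                 # step sized to ~5% energy loss
        loss = max(0.0, rng.gauss(mean_loss, 0.2 * mean_loss))
        E -= loss                                 # straggled energy loss
        theta += rng.gauss(0.0, 0.05)             # accumulated deflection (rad)
        n_steps += 1
    return n_steps, theta
```

The point of the exercise: tens of steps replace the ~10^5 individual interactions a single-collision treatment would require.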

Hughes, H.G.

1996-12-31

411

A Monte-Carlo maplet for the study of the optical properties of biological tissues

NASA Astrophysics Data System (ADS)

Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summaryProgram title:MonteCarloMaplet Catalogue identifier:ADZU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:3251 No. of bytes in distributed program, including test data, etc.:296 465 Distribution format: tar.gz Programming language:Maple 10 Computer: Acer Aspire 5610 (any running Maple 10) Operating system: Windows XP professional (any running Maple 10) Classification: 3.1, 5 Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; The Maple library routine for random number generation is used [Maple 10 User Manual c Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. 
If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required and once effected the worksheet runs without problem. However some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
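
The Wang et al. C program that the Maple implementation follows is organized around a "hop-drop-spin" photon loop: move a free path, deposit a fraction of the photon weight, sample a Henyey-Greenstein deflection. A stripped-down version (no geometry or boundaries, and the sampled direction is not propagated) looks like:

```python
import math
import random

def hop_drop_spin(mu_a, mu_s, g, n_photons=5000, seed=3):
    """Minimal 'hop-drop-spin' loop in the style of MCML. Moves a
    step (hop), deposits the weight fraction mu_a/mu_t (drop), and
    samples a Henyey-Greenstein deflection (spin). Tracks only the
    total absorbed weight; in an unbounded medium this approaches 1
    per photon. Positions/directions are not tracked in this sketch."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    absorbed = 0.0
    for _ in range(n_photons):
        w = 1.0
        while w > 1e-4:
            _step = -math.log(rng.random()) / mu_t      # hop (unused here)
            dw = w * mu_a / mu_t                        # drop
            absorbed += dw
            w -= dw
            if g == 0.0:                                # spin: isotropic
                cos_t = 2.0 * rng.random() - 1.0
            else:                                       # Henyey-Greenstein
                tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
                cos_t = (1 + g * g - tmp * tmp) / (2 * g)
    return absorbed / n_photons
```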

Yip, Man Ho; Carvalho, M. J.

2007-12-01

412

The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
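
The eigenvalue problem a criticality run like CASP solves has the generic form L·φ = (1/k_eff)·F·φ, where L is the loss (absorption plus leakage/scattering) operator and F the fission-production operator. A zero-dimensional two-group power iteration illustrates the idea; the 2x2 matrices used in the test are made-up numbers, not WSU reactor constants:

```python
def keff_power_iteration(F, L, n_iter=100):
    """Illustrative power iteration for k_eff on a zero-dimensional
    two-group model: finds the dominant eigenvalue of L^-1 F, i.e.
    solves L phi = (1/k) F phi. F and L are plain 2x2 lists of
    group constants (illustrative only). Returns (k_eff, flux)."""
    (a, b), (c, d) = L
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # L^-1, 2x2 by hand

    def matvec(M, v):
        return [M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1]]

    phi = [1.0, 1.0]
    k = 1.0
    for _ in range(n_iter):
        psi = matvec(inv, matvec(F, phi))  # next flux iterate: L^-1 F phi
        k = sum(psi) / sum(phi)            # eigenvalue estimate
        s = sum(psi)
        phi = [p / s for p in psi]         # renormalize the source
    return k, phi
```

A Monte Carlo criticality code performs the same fission-source iteration stochastically, with each generation of histories playing the role of one application of L^-1 F.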

Burns, T.D. Jr.

1996-05-01

413

Prompt-photon production in DIS

Prompt-photon cross sections in deep inelastic ep scattering were measured with the ZEUS detector at HERA using an integrated luminosity of 320 pb^-1. Measurements of differential cross sections are presented for inclusive prompt-photon production as a function of Q^2, x, E_T and eta. Perturbative QCD predictions and Monte Carlo predictions are compared to the measurements.

Matthew Forrest

2009-09-22

414

Monte Carlo tools to supplement experimental microdosimetric spectra.

Tissue-equivalent proportional counters (TEPCs) are widely used in experimental microdosimetry for characterising the radiation quality in radiation protection and radiation therapy environments. Generally, TEPCs are filled with tissue-equivalent gas mixtures, at low gas pressure, to simulate tissue site sizes similar to the cell nucleus (1 or 2 µm). Simulation of the TEPC response using Monte Carlo (MC) codes can be applied to supplement experimental measurements. Most general-purpose MC codes currently available resort to the condensed-history approach to model electron transport and do not transport low-energy electrons (<1 keV), which can lead to systematic errors, especially in thin layers and at gas-condensed-medium interfaces. In this work, a comparison is presented between experimental microdosimetric spectra of 60Co and 137Cs radiation at different simulated sizes (from 1.0 to 3.0 µm) in pure propane and simulated spectra obtained with two general-purpose codes, FLUKA and PENELOPE, which include a detailed simulation of electron-photon transport in arbitrary materials, including gases. PMID:24132390
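
Microdosimetric spectra such as these are usually summarized by the frequency-mean and dose-mean lineal energies. Given a list of single-event energy depositions and the mean chord length of the simulated site (for a spherical site of diameter d, the mean chord is 2d/3), both follow directly; the sketch below is generic, not tied to the codes compared in the paper:

```python
def lineal_energy_means(events_keV, mean_chord_um):
    """Frequency-mean (y_F) and dose-mean (y_D) lineal energy in
    keV/um from a list of energy-deposition events, as one would
    derive from a TEPC pulse-height spectrum.
    y = event energy / mean chord length of the simulated site."""
    y = [e / mean_chord_um for e in events_keV]
    y_f = sum(y) / len(y)                    # frequency-mean
    y_d = sum(v * v for v in y) / sum(y)     # dose-mean (y^2-weighted)
    return y_f, y_d
```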

Chiriotti, S; Moro, D; Conte, V; Colautti, P; D'Agostino, E; Sterpin, E; Vynckier, S

2014-10-01

415

The long-term human exploration goals that NASA has embraced require an understanding of the primary radiation and secondary particle production under a variety of environmental conditions. In order to perform accurate transport simulations for the incident particles found in the space environment, accurate nucleus-nucleus inelastic event generators are needed, and NASA is funding their development. For the first

K. T. Lee

2007-01-01

416

COMET-PE: an incident fluence response expansion transport method for radiotherapy calculations

NASA Astrophysics Data System (ADS)

Accurate dose calculation is a central component of radiotherapy treatment planning. A new method of dose calculation has been developed based on transport theory and validated by comparison to Monte Carlo methods. The coarse mesh transport method has been extended to allow coupled photon-electron transport in 3D. The method combines stochastic pre-computation with a deterministic solver to achieve high accuracy and precision. To enhance the method for radiotherapy calculations, a new angular basis was derived, and an analytical source treatment was developed. Validation was performed by comparison to DOSXYZnrc using a heterogeneous interface phantom composed of water, aluminum, and lung. Calculations of both kinetic energy rele